[openstack-dev] Havana Release V3 Extensions and new features to quota

Vishvananda Ishaya vishvananda at gmail.com
Wed Jan 29 17:47:28 UTC 2014


On Jan 29, 2014, at 8:56 AM, Vinod Kumar Boppanna <vinod.kumar.boppanna at cern.ch> wrote:

> Hi,
> 
> Correct me if I am wrong. I thought that when I run some nova commands or APIs using the token generated by authenticating with Keystone, nova itself will first talk to Keystone to validate this token. Otherwise, how can nova tell that the token is a valid one?

It communicates with Keystone if UUID tokens are enabled. Otherwise (PKI tokens) it just validates the signature locally.
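
For illustration, a minimal sketch of the online check, assuming a Keystone v3 endpoint on port 35357 (token values are placeholders; in a real deployment the auth_token middleware does this for the service):

    import requests

    ADMIN_TOKEN = "<token authorized to validate>"   # placeholder
    USER_TOKEN = "<token being checked>"             # placeholder

    # Online validation of a UUID token: ask Keystone whether the subject
    # token is still valid. PKI tokens skip this round trip; their signature
    # is verified locally against Keystone's signing certificate.
    resp = requests.get(
        "http://keystone:35357/v3/auth/tokens",
        headers={"X-Auth-Token": ADMIN_TOKEN,
                 "X-Subject-Token": USER_TOKEN})
    token_is_valid = (resp.status_code == 200)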
> 
> If this is the case, then when a user calls a URL, can we, in addition to validating the token, ask Keystone whether the user is an admin or not and, if not, check that the user requesting information for a domain and a project is actually part of that domain and project?

This is fundamentally different from how authorization works today. Roles are passed from Keystone and policy is enforced in the service. We could pass a role representing the domain, e.g. “CompanyAAdmin”, but this doesn’t allow us to do any validation of whether projectX is part of CompanyAAdmin. There has been some discussion about adding something like user@domain to the data passed around. Adding the same for projects would be one option and is quite similar to the hierarchical model I propose in the wiki.
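
As a rough illustration of why a project analogue of user@domain helps, here is a minimal sketch (the dotted-prefix convention and names are assumptions, not settled design): if tokens carried a fully qualified project such as companya.projectx, a service could answer the ownership question locally by prefix matching, without an extra Keystone round trip.

    # Hypothetical: tokens carry a hierarchical project id like "companya.projectx".
    def scoped_under(token_project, delegated_scope):
        """True if the token's project sits at or below the delegated scope."""
        return (token_project == delegated_scope
                or token_project.startswith(delegated_scope + "."))

    # scoped_under("companya.projectx", "companya")  -> True
    # scoped_under("companyb.projecty", "companya")  -> False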

> Also, the domain concept is not currently being used, so all grouping is done at the project level. But once the domain features come alive, is it really required for the user to also provide the tenant name during authentication? For example, to generate a token using curl commands with Keystone, the request JSON file is like this.
> 
> {
>    "auth": {
>        "identity": {
>            "methods": [
>                "password"
>            ],
>            "password": {
>                "user": {
>                    "domain": {
>                        "name": "default"   --->  This can be included, since the same user can be part of different domains with different passwords
>                    },
>                    "name": "<user_name>",
>                    "password": "<password>"
>                }
>            }
>        },
>        "scope": {
>            "project": {
>                "domain": {
>                    "name": "default"
>                },
>                "name": "admin"    ------> Why do I need this scope? In the URL I will anyway mention the tenant id for which I am requesting information.

URLs will not contain tenant/project ids in the future (and don’t today in some projects), so this is still necessary. The project name sets the scope of the token.
>            }
>        }
>    }
> }
> 
> or 
> export OS_USERNAME=<user_name>
> export OS_TENANT_NAME=admin  -------> same question: why do I need a tenant name? Instead I should only need a domain name to get authenticated
Same as above.

> export OS_PASSWORD=<password>
> export OS_AUTH_URL=http://<ip>:35357/v2.0/
> export PS1='[\u@\h \W(keystone_admin)]\$ '
> 
> For example, for requesting quota information we are now using "v2/{tenant_id}/os-quota-sets/{tenant_id}". Here I am actually using a tenant_id twice: first the requesting user's tenant id, and second the tenant id for which the information is requested. Instead of this, why can't I use "v2/os-quota-sets/{tenant_id}" after getting authenticated with a username and password in a domain? Then nova could ask Keystone whether the requesting user is part of that "tenant_id", and whether the domain id of this tenant matches the domain id of the user (for which the token was generated). If not, the query would not be allowed.

Agreed, the first tenant id will be removed from the nova v3 api. Unfortunately, there is a basic misunderstanding here of how Keystone authz works. The model you propose above, where Keystone has to arbitrate on ownership, does not scale, particularly for things like object storage, which has potentially billions of objects.
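
To make the scaling point concrete, a minimal sketch of the alternative (names are illustrative): the service compares the scope already carried in the validated token against the target in the URL, so no per-request call to Keystone is needed.

    # Hedged sketch: authorization stays local to the service. The context
    # comes from the already-validated token; Keystone is not consulted again.
    def allow_quota_show(context, target_tenant_id):
        if "admin" in context.roles:                    # admins may query any tenant
            return True
        return context.project_id == target_tenant_id   # token scope must match target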

> 
> I think that if domains get introduced without changing the way things currently work, it will become quite complex. For example, in the quota APIs, if the domain id is also incorporated, then the URLs may become long or complex.

Please attend the meeting and I will explain the simplified approach I am considering.

Vish

> 
> Cheers,
> Vinod Kumar Boppanna
> 
> 
> 
> 
> ________________________________________
> From: openstack-dev-request at lists.openstack.org [openstack-dev-request at lists.openstack.org]
> Sent: 29 January 2014 17:03
> To: openstack-dev at lists.openstack.org
> Subject: OpenStack-dev Digest, Vol 21, Issue 92
> 
> Send OpenStack-dev mailing list submissions to
>        openstack-dev at lists.openstack.org
> 
> To subscribe or unsubscribe via the World Wide Web, visit
>        http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> or, via email, send a message with subject or body 'help' to
>        openstack-dev-request at lists.openstack.org
> 
> You can reach the person managing the list at
>        openstack-dev-owner at lists.openstack.org
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of OpenStack-dev digest..."
> 
> 
> Today's Topics:
> 
>   1. Re: [all] Lots of gating failures because of      testtools
>      (Sylvain Bauza)
>   2. Re: Hierarchicical Multitenancy Discussion (Ulrich Schwickerath)
>   3. Re: [all] Lots of gating failures because of      testtools
>      (Davanum Srinivas)
>   4. [Nova][Scheduler] Policy Based Scheduler and Solver       Scheduler
>      (Khanh-Toan Tran)
>   5. Re: [Neutron]Contributing code to Neutron (ML2)
>      (Rossella Sblendido)
>   6. [Neutron] ryu-ml2-driver (YAMAMOTO Takashi)
>   7. Re: [nova][swift] Importing Launchpad Answers in  Ask
>      OpenStack (Swapnil Kulkarni)
>   8. Re: [Keystone] - Cloud federation on top of the   Apache
>      (Marek Denis)
>   9. Re: [oslo] log message translations (Doug Hellmann)
>  10. Re: [Nova] bp proposal: discovery of peer instances through
>      metadata service (Justin Santa Barbara)
>  11. [neutron] [ml2] The impending plethora of ML2
>      MechanismDrivers (Kyle Mestery)
>  12. Re: [savanna] How to handle diverging EDP job configuration
>      settings (Trevor McKay)
>  13. Re: [savanna] How to handle diverging EDP job configuration
>      settings (Trevor McKay)
>  14. Re: [savanna] How to handle diverging EDP job     configuration
>      settings (Jon Maron)
>  15. Re: [nova][neutron] PCI pass-through SRIOV on Jan. 29th
>      (Robert Li (baoli))
>  16. Nova V2 Quota API (Vinod Kumar Boppanna)
>  17. Re: Hierarchicical Multitenancy Discussion (Telles Nobrega)
>  18. Re: Nova V2 Quota API (Yingjun Li)
>  19. Re: [Ironic][Ceilometer]bp:send-data-to-ceilometer (Gordon Chung)
>  20. Re: [nova][neutron] PCI pass-through SRIOV on Jan. 29th
>      (Irena Berezovsky)
>  21. Re: Nova V2 Quota API (Anne Gentle)
>  22. Re: Havana Release V3 Extensions and new features to quota
>      (Vishvananda Ishaya)
>  23. Re: [nova] [neutron] PCI pass-through network support
>      (Robert Li (baoli))
>  24. Re: [savanna] How to handle diverging EDP job configuration
>      settings (Sergey Lukjanov)
>  25. Re: Nova V2 Quota API (Yingjun Li)
>  26. [nova][neutron][ml2] Proposal to support VIF security,
>      PCI-passthru/SR-IOV, and other binding-specific data (Robert Kukura)
>  27. [Heat] [Nova] [oslo] [Ceilometer] about notifications : huge
>      and may be non secure (Swann Croiset)
>  28. Re: [Nova] bp proposal: discovery of peer instances       through
>      metadata service (Vishvananda Ishaya)
>  29. Re: Hierarchicical Multitenancy Discussion (Vishvananda Ishaya)
>  30. Re: [nova][neutron] PCI pass-through SRIOV on Jan. 29th
>      (Robert Li (baoli))
> 
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Wed, 29 Jan 2014 13:04:01 +0100
> From: Sylvain Bauza <sylvain.bauza at bull.net>
> To: <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [all] Lots of gating failures because of
>        testtools
> Message-ID: <52E8EE31.9000805 at bull.net>
> Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
> 
> Le 29/01/2014 12:51, Sean Dague a ?crit :
>> Right, but until a testtools fix is released, it won't pass. So please
>> no rechecks until we have a new testtools from Robert that fixes things.
>> 
>>      -Sean
> 
> Indeed you're right. Any way to promote some bugs with Gerrit without
> doing a recheck, then ?
> 
> -Sylvain
> 
> 
> 
> ------------------------------
> 
> Message: 2
> Date: Wed, 29 Jan 2014 13:14:53 +0100
> From: Ulrich Schwickerath <ulrich.schwickerath at cern.ch>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Hierarchicical Multitenancy Discussion
> Message-ID: <52E8F0BD.7070608 at cern.ch>
> Content-Type: text/plain; charset="UTF-8"; format=flowed
> 
> Hi,
> 
> I'm working with Vinod. We'd like to join as well. Same issue on our
> side: 16:00 UTC is better for us.
> 
> Ulrich and Vinod
> 
> On 29.01.2014 10:56, Florent Flament wrote:
>> Hi Vishvananda,
>> 
>> I would be interested in such a working group.
>> Can you please confirm the meeting hour for this Friday ?
>> I've seen 1600 UTC in your email and 2100 UTC in the wiki (https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting ). As I'm in Europe I'd prefer 1600 UTC.
>> 
>> Florent Flament
>> 
>> ----- Original Message -----
>> From: "Vishvananda Ishaya" <vishvananda at gmail.com>
>> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
>> Sent: Tuesday, January 28, 2014 7:35:15 PM
>> Subject: [openstack-dev] Hierarchicical Multitenancy Discussion
>> 
>> Hi Everyone,
>> 
>> I apologize for the obtuse title, but there isn't a better succinct term to describe what is needed. OpenStack has no support for multiple owners of objects. This means that a variety of private cloud use cases are simply not supported. Specifically, objects in the system can only be managed on the tenant level or globally.
>> 
>> The key use case here is to delegate administration rights for a group of tenants to a specific user/role. There is something in Keystone called a ?domain? which supports part of this functionality, but without support from all of the projects, this concept is pretty useless.
>> 
>> In IRC today I had a brief discussion about how we could address this. I have put some details and a straw man up here:
>> 
>> https://wiki.openstack.org/wiki/HierarchicalMultitenancy
>> 
>> I would like to discuss this strawman and organize a group of people to get actual work done by having an irc meeting this Friday at 1600UTC. I know this time is probably a bit tough for Europe, so if we decide we need a regular meeting to discuss progress then we can vote on a better time for this meeting.
>> 
>> https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
>> 
>> Please note that this is going to be an active team that produces code. We will *NOT* spend a lot of time debating approaches, and instead focus on making something that works and learning as we go. The output of this team will be a MultiTenant devstack install that actually works, so that we can ensure the features we are adding to each project work together.
>> 
>> Vish
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ------------------------------
> 
> Message: 3
> Date: Wed, 29 Jan 2014 07:23:14 -0500
> From: Davanum Srinivas <davanum at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [all] Lots of gating failures because of
>        testtools
> Message-ID:
>        <CANw6fcHy77A1qragbnLA9-fF-mimCimeSTzQ8+48mXGBHF=ZBw at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> Robert,
> 
> Here's a merge request for subunit
> https://code.launchpad.net/~subunit/subunit/trunk/+merge/203723
> 
> -- dims
> 
> On Wed, Jan 29, 2014 at 6:51 AM, Sean Dague <sean at dague.net> wrote:
>> On 01/29/2014 06:24 AM, Sylvain Bauza wrote:
>>> Le 29/01/2014 12:07, Ivan Melnikov a ?crit :
>>>> I also filed a bug for taskflow, feel free to add your projects there if
>>>> it's affected, too: https://bugs.launchpad.net/taskflow/+bug/1274050
>>>> 
>>> 
>>> 
>>> Climate is also impacted, we can at least declare a recheck with this
>>> bug number.
>>> -Sylvain
>> 
>> Right, but until a testtools fix is released, it won't pass. So please
>> no rechecks until we have a new testtools from Robert that fixes things.
>> 
>>        -Sean
>> 
>> --
>> Sean Dague
>> Samsung Research America
>> sean at dague.net / sean.dague at samsung.com
>> http://dague.net
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> 
> --
> Davanum Srinivas :: http://davanum.wordpress.com
> 
> 
> 
> ------------------------------
> 
> Message: 4
> Date: Wed, 29 Jan 2014 12:25:14 +0000 (UTC)
> From: Khanh-Toan Tran <khanh-toan.tran at cloudwatt.com>
> To: "'OpenStack Development Mailing List \(not for usage questions\)'"
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and
>        Solver  Scheduler
> Message-ID: <29249ed2.0000144c.0000000a at cw-lap-8TWXLX1>
> Content-Type: text/plain;       charset="us-ascii"
> 
> Dear all,
> 
> As promised in the Scheduler/Gantt meeting, here is our analysis on the
> connection between Policy Based Scheduler and Solver Scheduler:
> 
> https://docs.google.com/document/d/1RfP7jRsw1mXMjd7in72ARjK0fTrsQv1bqolOri
> IQB2Y
> 
> This document briefs the mechanism of the two schedulers and the
> possibility of cooperation. It is my personal point of view only.
> 
> In a nutshell, Policy Based Scheduler allows admin to define policies for
> different physical resources (an aggregate, an availability-zone, or all
> infrastructure) or different (classes of) users. Admin can modify
> (add/remove/modify) any policy in runtime, and the modification effect is
> only in the target (e.g. the aggregate, the users) that the policy is
> defined to. Solver Scheduler solves the placement of groups of instances
> simultaneously by putting all the known information into a integer linear
> system and uses Integer Program solver to solve the latter. Thus relation
> between VMs and between VMs-computes are all accounted for.
> 
> If working together, Policy Based Scheduler can supply the filters and
> weighers following the policies rules defined for different computes.
> These filters and weighers can be converted into constraints & cost
> function for Solver Scheduler to solve. More detailed will be found in the
> doc.
> 
> I look forward for comments and hope that we can work it out.
> 
> Best regards,
> 
> Khanh-Toan TRAN
> 
> 
> 
> 
> ------------------------------
> 
> Message: 5
> Date: Wed, 29 Jan 2014 13:41:01 +0100
> From: Rossella Sblendido <rossella at midokura.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron]Contributing code to Neutron
>        (ML2)
> Message-ID:
>        <CAOSL_f9RSvt693bDGgj5CCEAPh3RLp1CTe_ZuBvzztvzwGRrCg at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Hi Trinath,
> 
> you can find more info about third party testing here [1]
> Every new driver or plugin is required to provide a testing system that
> will test new patches and post
> a +1/-1 to Gerrit .
> There were meetings organized by Kyle to talk about how to set up the
> system [2]
> It will probably help you if you read the logs of the meeting.
> 
> cheers,
> 
> Rossella
> 
> [1]
> https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Testing_Requirements
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2013-December/021882.html
> 
> 
> On Wed, Jan 29, 2014 at 7:50 AM, trinath.somanchi at freescale.com <
> trinath.somanchi at freescale.com> wrote:
> 
>> Hi Akihiro-
>> 
>> What kind of third party testing is required?
>> 
>> I have written the driver, unit test case and checked the driver with
>> tempest testing.
>> 
>> Do I need to check with any other third party testing?
>> 
>> Kindly help me in this regard.
>> 
>> --
>> Trinath Somanchi - B39208
>> trinath.somanchi at freescale.com | extn: 4048
>> 
>> -----Original Message-----
>> From: Akihiro Motoki [mailto:motoki at da.jp.nec.com]
>> Sent: Friday, January 24, 2014 6:41 PM
>> To: openstack-dev at lists.openstack.org
>> Cc: kmestery at cisco.com
>> Subject: Re: [openstack-dev] [Neutron]Contributing code to Neutron (ML2)
>> 
>> Hi Trinath,
>> 
>> Jenkins is not directly related to proposing a new code.
>> The process to contribute the code is described in the links Andreas
>> pointed. There is no difference even if you are writing a new ML2 mech
>> driver.
>> 
>> In addition to the above, Neutron now requires a third party testing for
>> all new/existing plugins and drivers [1].
>> Are you talking about third party testing for your ML2 mechanism driver
>> when you say "Jenkins"?
>> 
>> Both two things can be done in parallel, but you need to make your third
>> party testing ready before merging your code into the master repository.
>> 
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2013-November/019219.html
>> 
>> Thanks,
>> Akihiro
>> 
>> (2014/01/24 21:42), trinath.somanchi at freescale.com wrote:
>>> Hi Andreas -
>>> 
>>> Thanks you for the reply.. It helped me understand the ground work
>>> required.
>>> 
>>> But then, I'm writing a new Mechanism driver (FSL SDN Mechanism
>>> driver) for ML2.
>>> 
>>> For submitting new file sets, can I go with GIT or require Jenkins for
>>> the adding the new code for review.
>>> 
>>> Kindly help me in this regard.
>>> 
>>> --
>>> Trinath Somanchi - B39208
>>> trinath.somanchi at freescale.com | extn: 4048
>>> 
>>> -----Original Message-----
>>> From: Andreas Jaeger [mailto:aj at suse.com]
>>> Sent: Friday, January 24, 2014 4:54 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Cc: Kyle Mestery (kmestery)
>>> Subject: Re: [openstack-dev] [Neutron]Contributing code to Neutron
>>> (ML2)
>>> 
>>> On 01/24/2014 12:10 PM, trinath.somanchi at freescale.com wrote:
>>>> Hi-
>>>> 
>>>> 
>>>> 
>>>> Need support for ways to contribute code to Neutron regarding the ML2
>>>> Mechanism drivers.
>>>> 
>>>> 
>>>> 
>>>> I have installed Jenkins and created account in github and launchpad.
>>>> 
>>>> 
>>>> 
>>>> Kindly guide me on
>>>> 
>>>> 
>>>> 
>>>> [1] How to configure Jenkins to submit the code for review?
>>>> 
>>>> [2] What is the process involved in pushing the code base to the main
>>>> stream for icehouse release?
>>>> 
>>>> 
>>>> 
>>>> Kindly please help me understand the same..
>>> 
>>> Please read this wiki page completely, it explains the workflow we use.
>>> 
>>> https://wiki.openstack.org/wiki/GerritWorkflow
>>> 
>>> Please also read the general intro at
>>> https://wiki.openstack.org/wiki/HowToContribute
>>> 
>>> Btw. for submitting patches, you do not need a local Jenkins running,
>>> 
>>> Welcome to OpenStack, Kyle!
>>> 
>>> Andreas
>>> --
>>>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>>>   SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 N?rnberg, Germany
>>>    GF: Jeff Hawn,Jennifer Guild,Felix Imend?rffer,HRB16746 (AG N?rnberg)
>>>     GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272
>>> A126
>>> 
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>>> 
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140129/3079b213/attachment-0001.html>
> 
> ------------------------------
> 
> Message: 6
> Date: Wed, 29 Jan 2014 21:43:27 +0900 (JST)
> From: yamamoto at valinux.co.jp (YAMAMOTO Takashi)
> To: openstack-dev at lists.openstack.org
> Subject: [openstack-dev] [Neutron] ryu-ml2-driver
> Message-ID: <20140129124327.AD2D971A8A at kuma.localdomain>
> Content-Type: Text/Plain; charset=us-ascii
> 
> hi,
> 
> we (Ryu project) are currently working on a new version of
> Ryu neutron plugin/agent.  we have a blueprint for it
> waiting for review/approval.  can you please take a look?  thanks.
> https://blueprints.launchpad.net/neutron/+spec/ryu-ml2-driver
> 
> YAMAMOTO Takashi
> 
> 
> 
> ------------------------------
> 
> Message: 7
> Date: Wed, 29 Jan 2014 18:19:04 +0530
> From: Swapnil Kulkarni <swapnilkulkarni2608 at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [nova][swift] Importing Launchpad Answers
>        in      Ask OpenStack
> Message-ID:
>        <CAN_H9Nj3gGg=m_ZSQ+dRG-oskLdiZ584pWruaX8CW1QrEAcfHA at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Stef,
> 
> Getting lauchpad bus in ask.openstack would really help people and this
> looks really nice.(just saw some question-answers) I was not able to search
> for questions though and  (answered/unanswered) questions filters are not
> working. Just one small question, how the import will happen for future
> launchpad questions? Or launchpad questions will be disabled making
> ask.openstack default for openstack questions-answers?
> 
> 
> Best Regards,
> Swapnil
> *"It's better to SHARE"*
> 
> 
> 
> On Wed, Jan 29, 2014 at 1:13 PM, atul jha <stackeratul at gmail.com> wrote:
> 
>> 
>> 
>> 
>> On Wed, Jan 29, 2014 at 6:08 AM, Stefano Maffulli <stefano at openstack.org>wrote:
>> 
>>> Hello folks
>>> 
>>> we're almost ready to import all questions and asnwers from LP Answers
>>> into Ask OpenStack.  You can see the result of the import from Nova on
>>> the staging server http://ask-staging.openstack.org/
>>> 
>>> There are some formatting issues for the imported questions and I'm
>>> trying to evaluate how bad these are.  The questions I see are mostly
>>> readable and definitely pop up in search results, with their answers so
>>> they are valuable already as is. Some parts, especially the logs, may
>>> not look as good though. Fixing the parsers and get a better rendering
>>> for all imported questions would take an extra 3-5 days of work (maybe
>>> more) and I'm not sure it's worth it.
>>> 
>>> Please go ahead and browse the staging site and let me know what you
>>> think.
>>> 
>>> Cheers,
>>> stef
>>> 
>>> --
>>> Ask and answer questions on https://ask.openstack.org
>>> 
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>> 
>> Great!!
>> 
>> Cheers!!
>> 
>> --
>> 
>> 
>> Atul Jha
>> http://atuljha.com
>> (irc.freenode.net:koolhead17)
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140129/048cd3b8/attachment-0001.html>
> 
> ------------------------------
> 
> Message: 8
> Date: Wed, 29 Jan 2014 13:51:23 +0100
> From: Marek Denis <marek.denis at cern.ch>
> To: <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Keystone] - Cloud federation on top of
>        the     Apache
> Message-ID: <52E8F94B.9070107 at cern.ch>
> Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
> 
> On 28.01.2014 21:44, Adam Young wrote:
> 
>> To be clear, are you going to use mod_mellon as the Apache Auth module?
> 
> I am leaning towards mod_shib, as at least in theory it handles ECP
> extension. And I am not so sure mod_mellon does.
> 
> Adam, do you have at RedHat any experience with ECP SAML extensions or
> you used only webSSO?
> 
> --
> Marek Denis
> [marek.denis at cern.ch]
> 
> 
> 
> ------------------------------
> 
> Message: 9
> Date: Wed, 29 Jan 2014 08:05:11 -0500
> From: Doug Hellmann <doug.hellmann at dreamhost.com>
> To: Ben Nemec <openstack at nemebean.com>,  "OpenStack Development
>        Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>, Ying Chun Guo
>        <guoyingc at cn.ibm.com>
> Subject: Re: [openstack-dev] [oslo] log message translations
> Message-ID:
>        <CADb+p3SQW8W7+vnjTyur0dTS2g0XB6iBQW-FA+8ct3TjL2Mm9w at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> On Tue, Jan 28, 2014 at 8:47 PM, Ben Nemec <openstack at nemebean.com> wrote:
> 
>> On 2014-01-27 11:42, Doug Hellmann wrote:
>> 
>> We have a blueprint open for separating translated log messages into
>> different domains so the translation team can prioritize them differently
>> (focusing on errors and warnings before debug messages, for example) [1].
>> Some concerns were raised related to the review [2], and I would like to
>> address those in this thread and see if we can reach consensus about how to
>> proceed.
>> 
>> The implementation in [2] provides a set of new marker functions similar
>> to _(), one for each log level (we have _LE, LW, _LI, _LD, etc.). These
>> would be used in conjunction with _(), and reserved for log messages.
>> Exceptions, API messages, and other user-facing messages all would still be
>> marked for translation with _() and would (I assume) receive the highest
>> priority work from the translation team.
>> 
>> When the string extraction CI job is updated, we will have one "main"
>> catalog for each app or library, and additional catalogs for the log
>> levels. Those show up in transifex separately, but will be named in a way
>> that they are obviously related. Each translation team will be able to
>> decide, based on the requirements of their users, how to set priorities for
>> translating the different catalogs.
>> 
>> Existing strings being sent to the log and marked with _() will be removed
>> from the main catalog and moved to the appropriate log-level-specific
>> catalog when their marker function is changed. My understanding is that
>> transifex is smart enough to recognize the same string from more than one
>> source, and to suggest previous translations when it sees the same text.
>> This should make it easier for the translation teams to "catch up" by
>> reusing the translations they have already done, in the new catalogs.
>> 
>> One concern that was raised was the need to mark all of the log messages
>> by hand. I investigated using extraction patterns like "LOG.debug(" and
>> "LOG.info(", but because of the way the translation actually works
>> internally we cannot do that. There are a few related reasons.
>> 
>> In other applications, the function _() translates a string at the point
>> where it is invoked, and returns a new string object. OpenStack has a
>> requirement that messages be translated multiple times, whether in the API
>> or the LOG (there is already support for logging in more than one language,
>> to different log files). This requirement means we delay the translation
>> operation until right before the string is output, at which time we know
>> the target language. We could update the log functions to create Message
>> objects dynamically, except...
>> 
>> Each app or library that uses the translation code will need its own
>> "domain" for the message catalogs. We get around that right now by not
>> translating many messages from the libraries, but that's obviously not what
>> we want long term (we at least want exceptions translated). If we had a
>> special version of a logger in oslo.log that knew how to create Message
>> objects for the format strings used in logging (the first argument to
>> LOG.debug for example), it would also have to know what translation domain
>> to use so the proper catalog could be loaded. The wrapper functions defined
>> in the patch [2] include this information, and can be updated to be
>> application or library specific when oslo.log eventually becomes its own
>> library.
>> 
>> Further, as part of moving the logging code from oslo-incubator to
>> oslo.log, and making our logging something we can use from other OpenStack
>> libraries, we are trying to change the implementation of the logging code
>> so it is no longer necessary to create loggers with our special wrapper
>> function. That would mean that oslo.log will be a library for *configuring*
>> logging, but the actual log calls can be handled with Python's standard
>> library, eliminating a dependency between new libraries and oslo.log. (This
>> is a longer, and separate, discussion, but I mention it here as backround.
>> We don't want to change the API of the logger in oslo.log because we don't
>> want to be using it directly in the first place.)
>> 
>> Another concern raised was the use of a prefix _L for these functions,
>> since it ties the priority definitions to "logs." I chose that prefix as an
>> explicit indicate that these *are* just for logs. I am not associating any
>> actual priority with them. The translators want us to move the log messages
>> out of the main catalog. Having them all in separate catalogs is a
>> refinement that gives them what they want -- some translators don't care
>> about log messages at all, some only care about errors, etc. We decided
>> that the translators should set priorities, and we would make that possible
>> by separating the catalogs into logical groups. Everything marked with _()
>> will still go into the main catalog, but beyond that it isn't up to the
>> developers to indicate "priority" for translations.
>> 
>> The alternative approach of using babel translator comments would, under
>> other circumstances, help because each message could have some indication
>> of its relative importance. However, it does not meet the requirement that
>> the translators (and not the developers) set those priorities. It also
>> doesn't help the translators because the main catalog does not shrink to
>> hold only the user-facing messages. So the comments might be useful in
>> addition to this proposed change, but they doesn't solve the original
>> problem.
>> 
>> If we all agree on the approach, I think the patches already in progress
>> should be pretty easy to land in the incubator. The next step is to update
>> the CI jobs that extract the messages and interact with transifex. After
>> that, changes to the applications and existing libraries are likely to take
>> longer, and could be done in batches. They may not happen until the next
>> cycle, but I would like to have the infrastructure in place by the end of
>> this one.
>> 
>> Feedback?
>> 
>> Doug
>> 
>> [1]
>> https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
>> [2] https://review.openstack.org/#/c/65518/
>> 
>> I guess my thoughts are still largely the same as on the original review.
>> This is already going to be an additional burden on developers and
>> reviewers (who love i18n so much already ;-) and ideally I'd prefer that we
>> be a little less granular with our designations.  Something like _IMPORTANT
>> and _OPTIONAL instead of separate translation domains for each individual
>> log level.  Maybe that can't get the translation load down to a manageable
>> level though.  I'm kind of guessing on that point.
>> 
> 
> We did consider something like that at the summit, IIRC. However, we
> wanted to leave the job of setting the priority for doing the translation
> up to the translators, rather than the developers, because the priorities
> vary by language. Using designators that match the log output level lowers
> the review burden, because you don't have to think about the importance of
> translation, only whether or not the translator tag matches the log
> function.
> 
>> 
>> For reference, I grepped the nova source to see how many times we're
>> logging at each of the different levels.  It's a very rough estimate since
>> I'm sure I'm missing some things and there are almost certainly some dupes,
>> but I would expect it to be relatively close to reality.  Here were the
>> results:
>> 
>> [fedora at openstack nova]$ grep -ri log.error | wc -l
>> 190
>> [fedora at openstack nova]$ grep -ri log.warn | wc -l
>> 286
>> [fedora at openstack nova]$ grep -ri log.info | wc -l
>> 254
>> [fedora at openstack nova]$ grep -ri log.debug | wc -l
>> 849
>> 
>> It seems like debug is the low-hanging fruit here - getting rid of that
>> eliminates more translations than the rest of the log levels combined
>> (since it looks like Nova is translating the vast majority of their debug
>> messages).  I don't know if that's helpful (enough) though.
>> 
> I'm not sure either. Daisy, would it solve your team's needs if we just
> removed translation markers from debug log messages and left everything in
> the same catalog? It's not what we talked about at the summit, but maybe
> it's an alternative?
> 
>> 
>> I suppose my biggest concern is getting reviewers to buy in to whatever we
>> do.  It's going to be some additional workload for them since we likely
>> can't enforce this through a hacking rule, and some people basically refuse
>> to touch anything to do with translation as it is.  It's also one more
>> hurdle for new contributors since it's a non-standard way of handling
>> translation.  And, as I noted on the review, it's almost certainly going to
>> get out of sync over time as people adjust log message priorities and
>> such.  Maybe those are all issues we just have to accept, but they are
>> issues.
>> 
> I expect we'll need to set some project-wide standards, as Sean is doing
> with the meanings of the various log levels.
> 
>> 
>> Oh, one other thing I wanted to ask about was what the status of Transifex
>> is as far as OpenStack is concerned.  My understanding was that we were
>> looking for alternatives because Transifex had pretty much abandoned their
>> open source version.  Does that have any impact on this?
>> 
> If we replace it, we will replace it with another tool. The file formats
> are standardized, so I wouldn't expect a tool change at that level to
> affect our decision on this question.
> 
> Doug
> 
>> 
>> Anyway, it's getting late and my driveway won't shovel itself, so those
>> are my slightly rambling thoughts on this. :-)
>> 
>> -Ben
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140129/039e9ca2/attachment-0001.html>
> 
> ------------------------------
> 
> Message: 10
> Date: Wed, 29 Jan 2014 08:26:20 -0500
> From: Justin Santa Barbara <justin at fathomdb.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer
>        instances through metadata service
> Message-ID:
>        <CAFoXKmp98VEJtWh1JLEA1D+KoMdMnErJrmV4cQ2Ka2Pqxujn8Q at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> Certainly my original inclination (and code!) was to agree with you Vish, but:
> 
> 1) It looks like we're going to have writable metadata anyway, for
> communication from the instance to the API.
> 2) I believe the restrictions make it impractical to abuse it as a
> message-bus: size-limits, quotas and write-once make it very poorly
> suited for anything queue like.
> 3) Anything that isn't opt-in will likely have security implications
> which means that it won't get deployed.  This must be deployed to be
> useful.
> 
> In short: I agree that it's not the absolute ideal solution (for me,
> that would be no opt-in), but it feels like the best solution given
> that we must have opt-in, or else e.g. HP won't deploy it.  It uses a
> (soon to be) existing mechanism, and is readily extensible without
> breaking APIs.
> 
> On your idea of scoping by security group, I believe a certain someone
> is looking at supporting hierarchical projects, so we will likely need
> to support more advanced logic here later anyway.  For example:  the
> ability to specify whether an entry should be shared with instances in
> child projects.  This will likely take the form of a sort of selector
> language, so I anticipate we could offer a filter on security groups
> as well if this is useful.  We might well also allow selection by
> instance tags.  The approach allows this, though I would like to keep
> it as simple as possible at first (share with other instances in
> project or don't share)
> 
> Justin
> 
> 
> On Tue, Jan 28, 2014 at 10:39 PM, Vishvananda Ishaya
> <vishvananda at gmail.com> wrote:
>> 
>> On Jan 28, 2014, at 12:17 PM, Justin Santa Barbara <justin at fathomdb.com> wrote:
>> 
>>> Thanks John - combining with the existing effort seems like the right
>>> thing to do (I've reached out to Claxton to coordinate).  Great to see
>>> that the larger issues around quotas / write-once have already been
>>> agreed.
>>> 
>>> So I propose that sharing will work in the same way, but some values
>>> are visible across all instances in the project.  I do not think it
>>> would be appropriate for all entries to be shared this way.  A few
>>> options:
>>> 
>>> 1) A separate endpoint for shared values
>>> 2) Keys are shared iff  e.g. they start with a prefix, like 'peers_XXX'
>>> 3) Keys are set the same way, but a 'shared' parameter can be passed,
>>> either as a query parameter or in the JSON.
>>> 
>>> I like option #3 the best, but feedback is welcome.
>>> 
>>> I think I will have to store the value using a system_metadata entry
>>> per shared key.  I think this avoids issues with concurrent writes,
>>> and also makes it easier to have more advanced sharing policies (e.g.
>>> when we have hierarchical projects)
>>> 
>>> Thank you to everyone for helping me get to what IMHO is a much better
>>> solution than the one I started with!
>>> 
>>> Justin
>> 
>> I am -1 on the post data. I think we should avoid using the metadata service
>> as a cheap queue for communicating across vms and this moves strongly in
>> that direction.
>> 
>> I am +1 on providing a list of ip addresses in the current security group(s)
>> via metadata. I like limiting by security group instead of project because
>> this could prevent the 1000 instance case where people have large shared
>> tenants and it also provides a single tenant a way to have multiple autodiscoverd
>> services. Also the security group info is something that neutron has access
>> to so the neutron proxy should be able to generate the necessary info if
>> neutron is in use.
>> 
>> Just as an interesting side note, we put this vm list in way back in the NASA
>> days as an easy way to get mpi clusters running. In this case we grouped the
>> instances by the key_name used to launch the instance instead of security group.
>> I don't think it occurred to us to use security groups at the time.  Note we
>> also provided the number of cores, but this was for convienience because the
>> mpi implementation didn't support discovering number of cores. Code below.
>> 
>> Vish
>> 
>> $ git show 2cf40bb3
>> commit 2cf40bb3b21d33f4025f80d175a4c2ec7a2f8414
>> Author: Vishvananda Ishaya <vishvananda at yahoo.com>
>> Date:   Thu Jun 24 04:11:54 2010 +0100
>> 
>>    Adding mpi data
>> 
>> diff --git a/nova/endpoint/cloud.py b/nova/endpoint/cloud.py
>> index 8046d42..74da0ee 100644
>> --- a/nova/endpoint/cloud.py
>> +++ b/nova/endpoint/cloud.py
>> @@ -95,8 +95,21 @@ class CloudController(object):
>>     def get_instance_by_ip(self, ip):
>>         return self.instdir.by_ip(ip)
>> 
>> +    def _get_mpi_data(self, project_id):
>> +        result = {}
>> +        for node_name, node in self.instances.iteritems():
>> +            for instance in node.values():
>> +                if instance['project_id'] == project_id:
>> +                    line = '%s slots=%d' % (instance['private_dns_name'], instance.get('vcpus', 0))
>> +                    if instance['key_name'] in result:
>> +                        result[instance['key_name']].append(line)
>> +                    else:
>> +                        result[instance['key_name']] = [line]
>> +        return result
>> +
>>     def get_metadata(self, ip):
>>         i = self.get_instance_by_ip(ip)
>> +        mpi = self._get_mpi_data(i['project_id'])
>>         if i is None:
>>             return None
>>         if i['key_name']:
>> @@ -135,7 +148,8 @@ class CloudController(object):
>>                 'public-keys' : keys,
>>                 'ramdisk-id': i.get('ramdisk_id', ''),
>>                 'reservation-id': i['reservation_id'],
>> -                'security-groups': i.get('groups', '')
>> +                'security-groups': i.get('groups', ''),
>> +                'mpi': mpi
>>             }
>>         }
>>         if False: # TODO: store ancestor ids
>> 
>>> 
>>> 
>>> 
>>> 
>>> On Tue, Jan 28, 2014 at 4:38 AM, John Garbutt <john at johngarbutt.com> wrote:
>>>> On 27 January 2014 14:52, Justin Santa Barbara <justin at fathomdb.com> wrote:
>>>>> Day, Phil wrote:
>>>>> 
>>>>>> 
>>>>>>>> We already have a mechanism now where an instance can push metadata as
>>>>>>>> a way of Windows instances sharing their passwords - so maybe this
>>>>>>>> could
>>>>>>>> build on that somehow - for example each instance pushes the data its
>>>>>>>> willing to share with other instances owned by the same tenant ?
>>>>>>> 
>>>>>>> I do like that and think it would be very cool, but it is much more
>>>>>>> complex to
>>>>>>> implement I think.
>>>>>> 
>>>>>> I don't think its that complicated - just needs one extra attribute stored
>>>>>> per instance (for example into instance_system_metadata) which allows the
>>>>>> instance to be included in the list
>>>>> 
>>>>> 
>>>>> Ah - OK, I think I better understand what you're proposing, and I do like
>>>>> it.  The hardest bit of having the metadata store be full read/write would
>>>>> be defining what is and is not allowed (rate-limits, size-limits, etc).  I
>>>>> worry that you end up with a new key-value store, and with per-instance
>>>>> credentials.  That would be a separate discussion: this blueprint is trying
>>>>> to provide a focused replacement for multicast discovery for the cloud.
>>>>> 
>>>>> But: thank you for reminding me about the Windows password though...  It may
>>>>> provide a reasonable model:
>>>>> 
>>>>> We would have a new endpoint, say 'discovery'.  An instance can POST a
>>>>> single string value to the endpoint.  A GET on the endpoint will return any
>>>>> values posted by all instances in the same project.
>>>>> 
>>>>> One key only; name not publicly exposed ('discovery_datum'?); 255 bytes of
>>>>> value only.
>>>>> 
>>>>> I expect most instances will just post their IPs, but I expect other uses
>>>>> will be found.
>>>>> 
>>>>> If I provided a patch that worked in this way, would you/others be on-board?
>>>> 
>>>> I like that idea. Seems like a good compromise. I have added my review
>>>> comments to the blueprint.
>>>> 
>>>> We have this related blueprints going on, setting metadata on a
>>>> particular server, rather than a group:
>>>> https://blueprints.launchpad.net/nova/+spec/metadata-service-callbacks
>>>> 
>>>> It is limiting things using the existing Quota on metadata updates.
>>>> 
>>>> It would be good to agree a similar format between the two.
>>>> 
>>>> John
>>>> 
>>>> _______________________________________________
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> 
> ------------------------------
> 
> Message: 11
> Date: Wed, 29 Jan 2014 08:10:44 -0600
> From: Kyle Mestery <mestery at siliconloons.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [neutron] [ml2] The impending plethora of ML2
>        MechanismDrivers
> Message-ID: <88CCBB83-06FF-4F73-8421-F6E936E5BB99 at siliconloons.com>
> Content-Type: text/plain; charset=us-ascii
> 
> Folks:
> 
> As you can see from our meeting agent for today [1], we are tracking
> a large number of new ML2 MechanismDrivers at the moment. We plan
> to discuss these in the meeting again this week in the ML2 meeting [2]
> at 1600 UTC in #openstack-meeting-alt. Also, it would be great if each
> MechanismDriver had a representative at these weekly meetings. We
> are currently discussing some changes to port binding in ML2, so this
> may affect your MechanismDriver.
> 
> Thanks, and see you in the weekly ML2 meeting in a few hours!
> Kyle
> 
> [1] https://wiki.openstack.org/wiki/Meetings/ML2
> [2] https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
> 
> 
> 
> ------------------------------
> 
> Message: 12
> Date: Wed, 29 Jan 2014 09:15:27 -0500
> From: Trevor McKay <tmckay at redhat.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [savanna] How to handle diverging EDP job
>        configuration settings
> Message-ID: <1391004927.2306.2.camel at tmckaylt.rdu.redhat.com>
> Content-Type: text/plain; charset="UTF-8"
> 
> On Wed, 2014-01-29 at 14:35 +0400, Alexander Ignatov wrote:
>> Thank you for bringing this up, Trevor.
>> 
>> EDP gets more diverse and it's time to change its model.
>> I totally agree with your proposal, but one minor comment.
>> Instead of "savanna." prefix in job_configs wouldn't it be better to make it
>> as "edp."? I think "savanna." is too more wide word for this.
> 
> +1, brilliant. EDP is perfect.  I was worried about the scope of
> "savanna." too.
> 
>> And one more bureaucratic thing... I see you already started implementing it [1],
>> and it is named and goes as new EDP workflow [2]. I think new bluprint should be
>> created for this feature to track all code changes as well as docs updates.
>> Docs I mean public Savanna docs about EDP, rest api docs and samples.
> 
> Absolutely, I can make it new blueprint.  Thanks.
> 
>> [1] https://review.openstack.org/#/c/69712
>> [2] https://blueprints.launchpad.net/openstack/?searchtext=edp-oozie-streaming-mapreduce
>> 
>> Regards,
>> Alexander Ignatov
>> 
>> 
>> 
>> On 28 Jan 2014, at 20:47, Trevor McKay <tmckay at redhat.com> wrote:
>> 
>>> Hello all,
>>> 
>>> In our first pass at EDP, the model for job settings was very consistent
>>> across all of our job types. The execution-time settings fit into this
>>> (superset) structure:
>>> 
>>> job_configs = {'configs': {}, # config settings for oozie and hadoop
>>>           'params': {},  # substitution values for Pig/Hive
>>>           'args': []}    # script args (Pig and Java actions)
>>> 
>>> But we have some things that don't fit (and probably more in the
>>> future):
>>> 
>>> 1) Java jobs have 'main_class' and 'java_opts' settings
>>>  Currently these are handled as additional fields added to the
>>> structure above.  These were the first to diverge.
>>> 
>>> 2) Streaming MapReduce (anticipated) requires mapper and reducer
>>> settings (different than the mapred.xxxx.class settings for
>>> non-streaming MapReduce)
>>> 
>>> Problems caused by adding fields
>>> --------------------------------
>>> The job_configs structure above is stored in the database. Each time we
>>> add a field to the structure above at the level of configs, params, and
>>> args, we force a change to the database tables, a migration script and a
>>> change to the JSON validation for the REST api.
>>> 
>>> We also cause a change for python-savannaclient and potentially other
>>> clients.
>>> 
>>> This kind of change seems bad.
>>> 
>>> Proposal: Borrow a page from Oozie and add "savanna." configs
>>> -------------------------------------------------------------
>>> I would like to fit divergent job settings into the structure we already
>>> have.  One way to do this is to leverage the 'configs' dictionary.  This
>>> dictionary primarily contains settings for hadoop, but there are a
>>> number of "oozie.xxx" settings that are passed to oozie as configs or
>>> set by oozie for the benefit of running apps.
>>> 
>>> What if we allow "savanna." settings to be added to configs?  If we do
>>> that, any and all special configuration settings for specific job types
>>> or subtypes can be handled with no database changes and no api changes.
>>> 
>>> Downside
>>> --------
>>> Currently, all 'configs' are rendered in the generated oozie workflow.
>>> The "savanna." settings would be stripped out and processed by Savanna,
>>> thereby changing that behavior a bit (maybe not a big deal)
>>> 
>>> We would also be mixing "savanna." configs with config_hints for jobs,
>>> so users would potentially see "savanna.xxxx" settings mixed with oozie
>>> and hadoop settings.  Again, maybe not a big deal, but it might blur the
>>> lines a little bit.  Personally, I'm okay with this.
>>> 
>>> Slightly different
>>> ------------------
>>> We could also add a "'savanna-configs': {}" element to job_configs to
>>> keep the configuration spaces separate.
>>> 
>>> But, now we would have 'savanna-configs' (or another name), 'configs',
>>> 'params', and 'args'.  Really? Just how many different types of values
>>> can we come up with? :)
>>> 
>>> I lean away from this approach.
>>> 
>>> Related: breaking up the superset
>>> ---------------------------------
>>> 
>>> It is also the case that not every job type has every value type.
>>> 
>>>            Configs   Params    Args
>>> Hive            Y         Y        N
>>> Pig             Y         Y        Y
>>> MapReduce       Y         N        N
>>> Java            Y         N        Y
>>> 
>>> So do we make that explicit in the docs and enforce it in the api with
>>> errors?
>>> 
>>> Thoughts? I'm sure there are some :)
>>> 
>>> Best,
>>> 
>>> Trevor
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> ------------------------------
> 
> Message: 13
> Date: Wed, 29 Jan 2014 09:23:24 -0500
> From: Trevor McKay <tmckay at redhat.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [savanna] How to handle diverging EDP job
>        configuration settings
> Message-ID: <1391005404.2306.5.camel at tmckaylt.rdu.redhat.com>
> Content-Type: text/plain; charset="UTF-8"
> 
> So, assuming we go forward with this, the followup question is whether
> or not to move "main_class" and "java_opts" for Java actions into
> "edp.java.main_class" and "edp.java.java_opts" configs.
> 
> I think yes.
> 
> Best,
> 
> Trevor
> 
> On Wed, 2014-01-29 at 09:15 -0500, Trevor McKay wrote:
>> On Wed, 2014-01-29 at 14:35 +0400, Alexander Ignatov wrote:
>>> Thank you for bringing this up, Trevor.
>>> 
>>> EDP gets more diverse and it's time to change its model.
>>> I totally agree with your proposal, but one minor comment.
>>> Instead of "savanna." prefix in job_configs wouldn't it be better to make it
>>> as "edp."? I think "savanna." is too more wide word for this.
>> 
>> +1, brilliant. EDP is perfect.  I was worried about the scope of
>> "savanna." too.
>> 
>>> And one more bureaucratic thing... I see you already started implementing it [1],
>>> and it is named and goes as new EDP workflow [2]. I think new bluprint should be
>>> created for this feature to track all code changes as well as docs updates.
>>> Docs I mean public Savanna docs about EDP, rest api docs and samples.
>> 
>> Absolutely, I can make it new blueprint.  Thanks.
>> 
>>> [1] https://review.openstack.org/#/c/69712
>>> [2] https://blueprints.launchpad.net/openstack/?searchtext=edp-oozie-streaming-mapreduce
>>> 
>>> Regards,
>>> Alexander Ignatov
>>> 
>>> 
>>> 
>>> On 28 Jan 2014, at 20:47, Trevor McKay <tmckay at redhat.com> wrote:
>>> 
>>>> Hello all,
>>>> 
>>>> In our first pass at EDP, the model for job settings was very consistent
>>>> across all of our job types. The execution-time settings fit into this
>>>> (superset) structure:
>>>> 
>>>> job_configs = {'configs': {}, # config settings for oozie and hadoop
>>>>         'params': {},  # substitution values for Pig/Hive
>>>>         'args': []}    # script args (Pig and Java actions)
>>>> 
>>>> But we have some things that don't fit (and probably more in the
>>>> future):
>>>> 
>>>> 1) Java jobs have 'main_class' and 'java_opts' settings
>>>>  Currently these are handled as additional fields added to the
>>>> structure above.  These were the first to diverge.
>>>> 
>>>> 2) Streaming MapReduce (anticipated) requires mapper and reducer
>>>> settings (different than the mapred.xxxx.class settings for
>>>> non-streaming MapReduce)
>>>> 
>>>> Problems caused by adding fields
>>>> --------------------------------
>>>> The job_configs structure above is stored in the database. Each time we
>>>> add a field to the structure above at the level of configs, params, and
>>>> args, we force a change to the database tables, a migration script and a
>>>> change to the JSON validation for the REST api.
>>>> 
>>>> We also cause a change for python-savannaclient and potentially other
>>>> clients.
>>>> 
>>>> This kind of change seems bad.
>>>> 
>>>> Proposal: Borrow a page from Oozie and add "savanna." configs
>>>> -------------------------------------------------------------
>>>> I would like to fit divergent job settings into the structure we already
>>>> have.  One way to do this is to leverage the 'configs' dictionary.  This
>>>> dictionary primarily contains settings for hadoop, but there are a
>>>> number of "oozie.xxx" settings that are passed to oozie as configs or
>>>> set by oozie for the benefit of running apps.
>>>> 
>>>> What if we allow "savanna." settings to be added to configs?  If we do
>>>> that, any and all special configuration settings for specific job types
>>>> or subtypes can be handled with no database changes and no api changes.
>>>> 
>>>> Downside
>>>> --------
>>>> Currently, all 'configs' are rendered in the generated oozie workflow.
>>>> The "savanna." settings would be stripped out and processed by Savanna,
>>>> thereby changing that behavior a bit (maybe not a big deal)
>>>> 
>>>> We would also be mixing "savanna." configs with config_hints for jobs,
>>>> so users would potentially see "savanna.xxxx" settings mixed with oozie
>>>> and hadoop settings.  Again, maybe not a big deal, but it might blur the
>>>> lines a little bit.  Personally, I'm okay with this.
>>>> 
>>>> Slightly different
>>>> ------------------
>>>> We could also add a "'savanna-configs': {}" element to job_configs to
>>>> keep the configuration spaces separate.
>>>> 
>>>> But, now we would have 'savanna-configs' (or another name), 'configs',
>>>> 'params', and 'args'.  Really? Just how many different types of values
>>>> can we come up with? :)
>>>> 
>>>> I lean away from this approach.
>>>> 
>>>> Related: breaking up the superset
>>>> ---------------------------------
>>>> 
>>>> It is also the case that not every job type has every value type.
>>>> 
>>>>            Configs   Params    Args
>>>> Hive            Y         Y        N
>>>> Pig             Y         Y        Y
>>>> MapReduce       Y         N        N
>>>> Java            Y         N        Y
>>>> 
>>>> So do we make that explicit in the docs and enforce it in the api with
>>>> errors?
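A sketch of how that enforcement might look in the REST validation layer, with the allowed sets taken from the table above; the names here are illustrative, not the actual Savanna validation code:

    # Which value types each job type accepts, per the table above.
    ALLOWED = {
        'Hive':      {'configs', 'params'},
        'Pig':       {'configs', 'params', 'args'},
        'MapReduce': {'configs'},
        'Java':      {'configs', 'args'},
    }

    def check_job_configs(job_type, job_configs):
        unexpected = set(job_configs) - ALLOWED[job_type]
        if unexpected:
            raise ValueError('%s jobs do not accept: %s'
                             % (job_type, ', '.join(sorted(unexpected))))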
>>>> 
>>>> Thoughts? I'm sure there are some :)
>>>> 
>>>> Best,
>>>> 
>>>> Trevor
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> _______________________________________________
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> ------------------------------
> 
> Message: 14
> Date: Wed, 29 Jan 2014 09:37:11 -0500
> From: Jon Maron <jmaron at hortonworks.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [savanna] How to handle diverging EDP job
>        configuration settings
> Message-ID: <5D5AD034-00C1-4F96-8EFE-C97E45BF4A01 at hortonworks.com>
> Content-Type: text/plain; charset=windows-1252
> 
> I imagine 'neutron' would follow suit as well..
> 
> On Jan 29, 2014, at 9:23 AM, Trevor McKay <tmckay at redhat.com> wrote:
> 
>> So, assuming we go forward with this, the followup question is whether
>> or not to move "main_class" and "java_opts" for Java actions into
>> "edp.java.main_class" and "edp.java.java_opts" configs.
>> 
>> I think yes.
>> 
>> Best,
>> 
>> Trevor
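As an illustration of the move being agreed to here, a Java action's job_configs would change shape roughly like this (the class name and JVM options are made-up example values):

    # before: extra top-level fields alongside configs/params/args
    job_configs = {'configs': {}, 'args': ['in', 'out'],
                   'main_class': 'org.example.WordCount',
                   'java_opts': '-Xmx512m'}

    # after: folded into the configs dictionary under the "edp." prefix
    job_configs = {'configs': {'edp.java.main_class': 'org.example.WordCount',
                               'edp.java.java_opts': '-Xmx512m'},
                   'args': ['in', 'out']}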
>> 
>> On Wed, 2014-01-29 at 09:15 -0500, Trevor McKay wrote:
>>> On Wed, 2014-01-29 at 14:35 +0400, Alexander Ignatov wrote:
>>>> Thank you for bringing this up, Trevor.
>>>> 
>>>> EDP gets more diverse and it's time to change its model.
>>>> I totally agree with your proposal, but one minor comment.
>>>> Instead of "savanna." prefix in job_configs wouldn't it be better to make it
>>>> as "edp."? I think "savanna." is too more wide word for this.
>>> 
>>> +1, brilliant. EDP is perfect.  I was worried about the scope of
>>> "savanna." too.
>>> 
>>>> And one more bureaucratic thing... I see you already started implementing it [1],
>>>> and it is named and goes as a new EDP workflow [2]. I think a new blueprint should be
>>>> created for this feature to track all code changes as well as docs updates.
>>>> By docs I mean the public Savanna docs about EDP, REST API docs and samples.
>>> 
>>> Absolutely, I can make it a new blueprint.  Thanks.
>>> 
>>>> [1] https://review.openstack.org/#/c/69712
>>>> [2] https://blueprints.launchpad.net/openstack/?searchtext=edp-oozie-streaming-mapreduce
>>>> 
>>>> Regards,
>>>> Alexander Ignatov
>>>> 
>>>> 
>>>> 
>>>> On 28 Jan 2014, at 20:47, Trevor McKay <tmckay at redhat.com> wrote:
>>>> 
>>>>> Hello all,
>>>>> 
>>>>> In our first pass at EDP, the model for job settings was very consistent
>>>>> across all of our job types. The execution-time settings fit into this
>>>>> (superset) structure:
>>>>> 
>>>>> job_configs = {'configs': {}, # config settings for oozie and hadoop
>>>>>          'params': {},  # substitution values for Pig/Hive
>>>>>          'args': []}    # script args (Pig and Java actions)
>>>>> 
>>>>> But we have some things that don't fit (and probably more in the
>>>>> future):
>>>>> 
>>>>> 1) Java jobs have 'main_class' and 'java_opts' settings
>>>>> Currently these are handled as additional fields added to the
>>>>> structure above.  These were the first to diverge.
>>>>> 
>>>>> 2) Streaming MapReduce (anticipated) requires mapper and reducer
>>>>> settings (different than the mapred.xxxx.class settings for
>>>>> non-streaming MapReduce)
>>>>> 
>>>>> Problems caused by adding fields
>>>>> --------------------------------
>>>>> The job_configs structure above is stored in the database. Each time we
>>>>> add a field to the structure above at the level of configs, params, and
>>>>> args, we force a change to the database tables, a migration script and a
>>>>> change to the JSON validation for the REST api.
>>>>> 
>>>>> We also cause a change for python-savannaclient and potentially other
>>>>> clients.
>>>>> 
>>>>> This kind of change seems bad.
>>>>> 
>>>>> Proposal: Borrow a page from Oozie and add "savanna." configs
>>>>> -------------------------------------------------------------
>>>>> I would like to fit divergent job settings into the structure we already
>>>>> have.  One way to do this is to leverage the 'configs' dictionary.  This
>>>>> dictionary primarily contains settings for hadoop, but there are a
>>>>> number of "oozie.xxx" settings that are passed to oozie as configs or
>>>>> set by oozie for the benefit of running apps.
>>>>> 
>>>>> What if we allow "savanna." settings to be added to configs?  If we do
>>>>> that, any and all special configuration settings for specific job types
>>>>> or subtypes can be handled with no database changes and no api changes.
>>>>> 
>>>>> Downside
>>>>> --------
>>>>> Currently, all 'configs' are rendered in the generated oozie workflow.
>>>>> The "savanna." settings would be stripped out and processed by Savanna,
>>>>> thereby changing that behavior a bit (maybe not a big deal)
>>>>> 
>>>>> We would also be mixing "savanna." configs with config_hints for jobs,
>>>>> so users would potentially see "savanna.xxxx" settings mixed with oozie
>>>>> and hadoop settings.  Again, maybe not a big deal, but it might blur the
>>>>> lines a little bit.  Personally, I'm okay with this.
>>>>> 
>>>>> Slightly different
>>>>> ------------------
>>>>> We could also add a "'savanna-configs': {}" element to job_configs to
>>>>> keep the configuration spaces separate.
>>>>> 
>>>>> But, now we would have 'savanna-configs' (or another name), 'configs',
>>>>> 'params', and 'args'.  Really? Just how many different types of values
>>>>> can we come up with? :)
>>>>> 
>>>>> I lean away from this approach.
>>>>> 
>>>>> Related: breaking up the superset
>>>>> ---------------------------------
>>>>> 
>>>>> It is also the case that not every job type has every value type.
>>>>> 
>>>>>           Configs   Params    Args
>>>>> Hive            Y         Y        N
>>>>> Pig             Y         Y        Y
>>>>> MapReduce       Y         N        N
>>>>> Java            Y         N        Y
>>>>> 
>>>>> So do we make that explicit in the docs and enforce it in the api with
>>>>> errors?
>>>>> 
>>>>> Thoughts? I'm sure there are some :)
>>>>> 
>>>>> Best,
>>>>> 
>>>>> Trevor
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> _______________________________________________
>>>>> OpenStack-dev mailing list
>>>>> OpenStack-dev at lists.openstack.org
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> 
>>>> 
>>>> _______________________________________________
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>>> 
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> ------------------------------
> 
> Message: 15
> Date: Wed, 29 Jan 2014 14:46:41 +0000
> From: "Robert Li (baoli)" <baoli at cisco.com>
> To: Irena Berezovsky <irenab at mellanox.com>, "rkukura at redhat.com"
>        <rkukura at redhat.com>, "Sandhya Dasu (sadasu)" <sadasu at cisco.com>,
>        "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
>        Jan. 29th
> Message-ID: <CF0E73B9.3D6DC%baoli at cisco.com>
> Content-Type: text/plain; charset="us-ascii"
> 
> Hi folks,
> 
> I'd like to do a recap on today's meeting, and if possible we should continue the discussion in this thread so that we can be more productive in tomorrow's meeting.
> 
> Bob suggests that we have these BPs:
> One generic covering implementing binding:profile in ML2, and one specific to PCI-passthru, defining the vnic-type (wherever it goes) and any keys for binding:profile.
> 
> 
> Irena suggests that we have three BPs:
> 1. generic ML2 support for binding:profile (corresponding to Bob's covering implementing binding:profile in ML2 ?)
> 2. add vnic_type support for binding Mech Drivers in ML2 plugin
> 3. support PCI slot via profile (corresponding to Bob's any keys for binding:profile ?)
> 
> Both proposals sound similar, so it's great that we are converging. I think that it's important that we put more details in each BP on what's exactly covered by it. One question I have is about where binding:profile will be implemented. I see that portbinding is defined/implemented under its extension and neutron.db. So when both of you guys say "implementing binding:profile in ML2", I'm kind of confused. Please let me know what I'm missing here. My understanding is that non-ML2 plugins can use it as well.
> 
> Another issue that came up during the meeting is about whether or not vnic-type should be part of the top level binding or part of binding:profile. In other words, should it be defined as binding:vnic-type or binding:profile:vnic-type.
> 
> We also need one or two BPs to cover the change in the neutron port-create/port-show CLI/API.
> 
> Another thing is that we need to define the binding:profile dictionary.
> 
> Thanks,
> Robert
> 
> 
> 
> On 1/29/14 4:02 AM, "Irena Berezovsky" <irenab at mellanox.com<mailto:irenab at mellanox.com>> wrote:
> 
> Will attend
> 
> From: Robert Li (baoli) [mailto:baoli at cisco.com]
> Sent: Wednesday, January 29, 2014 12:55 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th
> 
> Hi Folks,
> 
> Can we have one more meeting tomorrow? I'd like to discuss the blueprints we are going to have and what each BP will be covering.
> 
> thanks,
> Robert
> 
> ------------------------------
> 
> Message: 16
> Date: Wed, 29 Jan 2014 14:48:18 +0000
> From: Vinod Kumar Boppanna <vinod.kumar.boppanna at cern.ch>
> To: "openstack-dev at lists.openstack.org"
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] Nova V2 Quota API
> Message-ID:
>        <9060BFC90E7F6A41B84D1CF0E33689C401000429C1 at PLOXCHG24.cern.ch>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Hi,
> 
> In the documentation, it is mentioned that there are two APIs to see the quotas of a tenant.
> 
> 1. v2/{tenant_id}/os-quota-sets - Shows quotas for a tenant
> 
> 2. v2/{tenant_id}/os-quota-sets/{tenant_id}/{user_id} - Enables an admin to show quotas for a specified tenant and a user
> 
> I guess the first API can be used by a member in a tenant to get the quotas of that tenant. The second one can be run by admin to get the quotas of any tenant or any user.
> 
> But as a normal user, when I am running any of the below (after authentication)
> 
> $> nova --debug quota-show --tenant <tenant_id>    (tenant id of a project in which this user is member)
> It is calling the second API i.e  v2/{tenant_id}/os-quota-sets/{tenant_id}
> 
> or even when I am calling the API directly
> 
> $>  curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:8774/v2/<tenant_id>/os-quota-sets/<http://localhost:8774/v2/2665b63d29a1493990ab1c5412fc838d/os-quota-sets/>
> It says the "Resource not found".
> 
> So, is the first API available?
> 
> Regards,
> Vinod Kumar Boppanna
> 
> ------------------------------
> 
> Message: 17
> Date: Wed, 29 Jan 2014 12:56:47 -0200
> From: Telles Nobrega <tellesnobrega at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Hierarchicical Multitenancy Discussion
> Message-ID:
>        <CADbqdAzHoYdVyp-mBeEJCAtApHvhES3Eu1-uKojhYEbqAmnB=g at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Hi,
> 
> I'm also working with multitenancy and I would like to join this working
> group.
> 
> Telles Nóbrega
> 
> 
> On Wed, Jan 29, 2014 at 9:14 AM, Ulrich Schwickerath <
> ulrich.schwickerath at cern.ch> wrote:
> 
>> Hi,
>> 
>> I'm working with Vinod. We'd like to join as well. Same issue on our side:
>> 16:00 UTC is better for us.
>> 
>> Ulrich and Vinod
>> 
>> 
>> On 29.01.2014 10:56, Florent Flament wrote:
>> 
>>> Hi Vishvananda,
>>> 
>>> I would be interested in such a working group.
>>> Can you please confirm the meeting hour for this Friday?
>>> I've seen 1600 UTC in your email and 2100 UTC in the wiki (
>>> https://wiki.openstack.org/wiki/Meetings#Hierarchical_
>>> Multitenancy_Meeting ). As I'm in Europe I'd prefer 1600 UTC.
>>> 
>>> Florent Flament
>>> 
>>> ----- Original Message -----
>>> From: "Vishvananda Ishaya" <vishvananda at gmail.com>
>>> To: "OpenStack Development Mailing List (not for usage questions)" <
>>> openstack-dev at lists.openstack.org>
>>> Sent: Tuesday, January 28, 2014 7:35:15 PM
>>> Subject: [openstack-dev] Hierarchicical Multitenancy Discussion
>>> 
>>> Hi Everyone,
>>> 
>>> I apologize for the obtuse title, but there isn't a better succinct term
>>> to describe what is needed. OpenStack has no support for multiple owners of
>>> objects. This means that a variety of private cloud use cases are simply
>>> not supported. Specifically, objects in the system can only be managed on
>>> the tenant level or globally.
>>> 
>>> The key use case here is to delegate administration rights for a group of
>>> tenants to a specific user/role. There is something in Keystone called a
>>> "domain" which supports part of this functionality, but without support
>>> from all of the projects, this concept is pretty useless.
>>> 
>>> In IRC today I had a brief discussion about how we could address this. I
>>> have put some details and a straw man up here:
>>> 
>>> https://wiki.openstack.org/wiki/HierarchicalMultitenancy
>>> 
>>> I would like to discuss this strawman and organize a group of people to
>>> get actual work done by having an irc meeting this Friday at 1600UTC. I
>>> know this time is probably a bit tough for Europe, so if we decide we need
>>> a regular meeting to discuss progress then we can vote on a better time for
>>> this meeting.
>>> 
>>> https://wiki.openstack.org/wiki/Meetings#Hierarchical_
>>> Multitenancy_Meeting
>>> 
>>> Please note that this is going to be an active team that produces code.
>>> We will *NOT* spend a lot of time debating approaches, and instead focus on
>>> making something that works and learning as we go. The output of this team
>>> will be a MultiTenant devstack install that actually works, so that we can
>>> ensure the features we are adding to each project work together.
>>> 
>>> Vish
>>> 
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> 
> --
> ------------------------------------------
> Telles Mota Vidal Nobrega
> Bsc in Computer Science at UFCG
> Developer at PulsarOpenStack Project - HP/LSD-UFCG
> 
> ------------------------------
> 
> Message: 18
> Date: Wed, 29 Jan 2014 23:22:16 +0800
> From: Yingjun Li <liyingjun1988 at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Nova V2 Quota API
> Message-ID: <23823D96-39C9-4B3E-A84B-FEAF5FABB395 at gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> 
> On Jan 29, 2014, at 22:48, Vinod Kumar Boppanna <vinod.kumar.boppanna at cern.ch> wrote:
> 
>> Hi,
>> 
>> In the documentation, it is mentioned that there are two APIs to see the quotas of a tenant.
>> 
>> 1. v2/{tenant_id}/os-quota-sets - Shows quotas for a tenant
>> 
>> 2. v2/{tenant_id}/os-quota-sets/{tenant_id}/{user_id} - Enables an admin to show quotas for a specified tenant and a user
>> 
>> I guess the first API can be used by a member in a tenant to get the quotas of that tenant. The second one can be run by admin to get the quotas of any tenant or any user.
>> 
>> But as a normal user, when I am running any of the below (after authentication)
>> 
>> $> nova --debug quota-show --tenant <tenant_id>    (tenant id of a project in which this user is member)
>> It is calling the second API i.e  v2/{tenant_id}/os-quota-sets/{tenant_id}
>> 
>> or even when I am calling the API directly
>> 
>> $>  curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:8774/v2/<tenant_id>/os-quota-sets/
> 
> I think the documentation is missing <tenant_id> after os-quota-sets/.
> It should be like: curl -i -H "X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:8774/v2/<tenant_id>/os-quota-sets/<tenant_id>
> 
>> It says the "Resource not found".
>> 
>> So, is the first API available?
>> 
>> Regards,
>> Vinod Kumar Boppanna
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ------------------------------
> 
> Message: 19
> Date: Wed, 29 Jan 2014 10:22:35 -0500
> From: Gordon Chung <chungg at ca.ibm.com>
> To: "OpenStack Development Mailing List \(not for usage questions\)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev]
>        [Ironic][Ceilometer]bp:send-data-to-ceilometer
> Message-ID:
>        <OF84721C3C.19FB186C-ON85257C6F.0053B700-85257C6F.0054786E at ca.ibm.com>
> Content-Type: text/plain; charset="us-ascii"
> 
>>    Meter Names:
>>        fanspeed, fanspeed.min, fanspeed.max, fanspeed.status
>>        voltage, voltage.min, voltage.max, voltage.status
>>        temperature, temperature.min, temperature.max, temperature.status
>> 
>>                'FAN 1': {
>>                    'current_value': '4652',
>>                    'min_value': '4200',
>>                    'max_value': '4693',
>>                    'status': 'ok'
>>                }
>>                'FAN 2': {
>>                    'current_value': '4322',
>>                    'min_value': '4210',
>>                    'max_value': '4593',
>>                    'status': 'ok'
>>            },
>>            'voltage': {
>>                'Vcore': {
>>                    'current_value': '0.81',
>>                    'min_value': '0.80',
>>                    'max_value': '0.85',
>>                    'status': 'ok'
>>                },
>>                '3.3VCC': {
>>                    'current_value': '3.36',
>>                    'min_value': '3.20',
>>                    'max_value': '3.56',
>>                    'status': 'ok'
>>                },
>>            ...
>>        }
>>    }
> 
> are FAN 1, FAN 2, Vcore, etc... variable names or values that would
> consistently show up? if the former, would it make sense to have the
> meters be similar to fanspeed:<trait> where trait is FAN1, FAN2, etc...?
> if the meter is just fanspeed, what would the volume be? FAN 1's
> current_value?
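One reading of gordon's suggestion, sketched in Python: flatten each sensor category into per-trait samples, using the trait name ('FAN 1', 'Vcore', ...) to qualify the meter or its resource metadata. The tuple shape below is an assumption for illustration, not ceilometer's actual sample API:

    def sensor_samples(category, readings):
        """Yield one (meter, volume, metadata) triple per trait and statistic."""
        for trait, data in readings.items():          # e.g. trait = 'FAN 1'
            meta = {'trait': trait, 'status': data['status']}
            yield (category, float(data['current_value']), meta)
            yield (category + '.min', float(data['min_value']), meta)
            yield (category + '.max', float(data['max_value']), meta)

With this shape, the volume of the plain fanspeed meter is each fan's current_value, and the fan's identity lives in the metadata rather than in the meter name.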
> 
> cheers,
> 
> gordon chung
> openstack, ibm software standards
> 
> ------------------------------
> 
> Message: 20
> Date: Wed, 29 Jan 2014 15:19:57 +0000
> From: Irena Berezovsky <irenab at mellanox.com>
> To: "Robert Li (baoli)" <baoli at cisco.com>, "rkukura at redhat.com"
>        <rkukura at redhat.com>, "Sandhya Dasu (sadasu)" <sadasu at cisco.com>,
>        "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
>        Jan. 29th
> Message-ID:
>        <9D25E123B44F4A4291F4B5C13DA94E7788300EBD at MTLDAG02.mtl.com>
> Content-Type: text/plain; charset="us-ascii"
> 
> Hi Robert,
> I think that I can go with Bob's suggestion, but I think it makes sense to cover the vnic_type and PCI-passthru via two separate patches. Adding vnic_type will probably impose changes to existing Mech. Drivers, while PCI-passthru is about introducing some pieces for new SRIOV-supporting Mech. Drivers.
> 
> More comments inline
> 
> BR,
> IRena
> 
> From: Robert Li (baoli) [mailto:baoli at cisco.com]
> Sent: Wednesday, January 29, 2014 4:47 PM
> To: Irena Berezovsky; rkukura at redhat.com; Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th
> 
> Hi folks,
> 
> I'd like to do a recap on today's meeting, and if possible we should continue the discussion in this thread so that we can be more productive in tomorrow's meeting.
> 
> Bob suggests that we have these BPs:
> One generic covering implementing binding:profile in ML2, and one specific to PCI-passthru, defining the vnic-type (wherever it goes) and any keys for binding:profile.
> 
> 
> Irena suggests that we have three BPs:
> 1. generic ML2 support for binding:profile (corresponding to Bob's covering implementing binding:profile in ML2 ?)
> 2. add vnic_type support for binding Mech Drivers in ML2 plugin
> 3. support PCI slot via profile (corresponding to Bob's any keys for binding:profile ?)
> 
> Both proposals sound similar, so it's great that we are converging. I think that it's important that we put more details in each BP on what's exactly covered by it. One question I have is about where binding:profile will be implemented. I see that portbinding is defined/implemented under its extension and neutron.db. So when both of you guys say "implementing binding:profile in ML2", I'm kind of confused. Please let me know what I'm missing here. My understanding is that non-ML2 plugins can use it as well.
> [IrenaB] Basically you  are right. Currently ML2 does not inherit the DB Mixin for port binding. So it supports the port binding extension, but uses its own DB table to store relevant attributes. Making it work for ML2 means not adding this support to PortBindingMixin.
> 
> Another issue that came up during the meeting is about whether or not vnic-type should be part of the top level binding or part of binding:profile. In other words, should it be defined as binding:vnic-type or binding:profile:vnic-type.
> [IrenaB] As long as existing binding-capable Mech Drivers take vnic_type into consideration, I guess doing it via binding:profile will introduce fewer changes all over (CLI, API). But I am not sure this reason is strong enough to choose this direction.
> We also need one or two BPs to cover the change in the neutron port-create/port-show CLI/API.
> [IrenaB] binding:profile is already supported, so it probably depends on direction with vnic_type
> 
> Another thing is that we need to define the binding:profile dictionary.
> [IrenaB] With regards to PCI SRIOV related attributes, right?
> 
> Thanks,
> Robert
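For concreteness, the two placements being debated would look roughly like this on the wire (the key names, including pci_slot, are placeholders pending the BPs, not settled API):

    # Option A: vnic-type as a top-level binding attribute
    {"port": {"binding:vnic_type": "direct",
              "binding:profile": {"pci_slot": "0000:08:00.2"}}}

    # Option B: vnic-type nested inside binding:profile
    {"port": {"binding:profile": {"vnic_type": "direct",
                                  "pci_slot": "0000:08:00.2"}}}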
> 
> 
> 
> On 1/29/14 4:02 AM, "Irena Berezovsky" <irenab at mellanox.com<mailto:irenab at mellanox.com>> wrote:
> 
> Will attend
> 
> From: Robert Li (baoli) [mailto:baoli at cisco.com]
> Sent: Wednesday, January 29, 2014 12:55 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th
> 
> Hi Folks,
> 
> Can we have one more meeting tomorrow? I'd like to discuss the blueprints we are going to have and what each BP will be covering.
> 
> thanks,
> Robert
> 
> ------------------------------
> 
> Message: 21
> Date: Wed, 29 Jan 2014 09:33:49 -0600
> From: Anne Gentle <anne at openstack.org>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Nova V2 Quota API
> Message-ID:
>        <CAD0KtVGm3dZZ=Lziogrn_BLc29GX5kZwDPRTx-QsZYt4zt51pg at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Hi, can you point out where you're seeing documentation for the first
> one, without tenant_id?
> 
> At http://api.openstack.org/api-ref-compute-ext.html#ext-os-quota-sets only
> the tenant_id is documented.
> 
> This is documented identically at
> http://docs.openstack.org/api/openstack-compute/2/content/ext-os-quota-sets.html
> 
> Let us know where you're seeing the misleading documentation so we can log
> a bug and fix it.
> Anne
> 
> 
> On Wed, Jan 29, 2014 at 8:48 AM, Vinod Kumar Boppanna <
> vinod.kumar.boppanna at cern.ch> wrote:
> 
>> Hi,
>> 
>> In the documentation, it is mentioned that there are two APIs to see the
>> quotas of a tenant.
>> 
>> 1. v2/{tenant_id}/os-quota-sets - Shows quotas for a tenant
>> 
>> 2. v2/{tenant_id}/os-quota-sets/{tenant_id}/{user_id} - Enables an admin
>> to show quotas for a specified tenant and a user
>> 
>> I guess the first API can be used by a member in a tenant to get the
>> quotas of that tenant. The second one can be run by admin to get the quotas
>> of any tenant or any user.
>> 
>> But as a normal user, when I am running any of the below (after
>> authentication)
>> 
>> $> nova --debug quota-show --tenant <tenant_id>    (tenant id of a project
>> in which this user is member)
>> It is calling the second API i.e  v2/{tenant_id}/os-quota-sets/{tenant_id}
>> 
>> or even when I am calling the API directly
>> 
>> $>  curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json"
>> http://localhost:8774/v2/<tenant_id>/os-quota-sets/<http://localhost:8774/v2/2665b63d29a1493990ab1c5412fc838d/os-quota-sets/>
>> It says the "Resource not found".
>> 
>> So, is the first API available?
>> 
>> Regards,
>> Vinod Kumar Boppanna
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
> 
> ------------------------------
> 
> Message: 22
> Date: Wed, 29 Jan 2014 07:42:12 -0800
> From: Vishvananda Ishaya <vishvananda at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Cc: "project-cloudman \(Cloudman-high level cloud management tool
>        project\)" <project-cloudman at cern.ch>
> Subject: Re: [openstack-dev] Havana Release V3 Extensions and new
>        features        to quota
> Message-ID: <7D29DF80-84EB-4CD5-8503-062FCCF2AC1E at gmail.com>
> Content-Type: text/plain; charset="windows-1252"
> 
> 
> On Jan 29, 2014, at 3:55 AM, Vinod Kumar Boppanna <vinod.kumar.boppanna at cern.ch> wrote:
> 
>> Dear Vishvananda,
>> 
>> Sorry for the very late reply. I was stupid not to follow your reply (I had missed it somehow).
>> 
>> Actually, I am confused after seeing your mail. In the last two weeks, I was doing some testing (creating use cases) on Keystone and Nova.
>> 
>> Part 1:  Delegating rights
>> 
>> I had made the following observations using Keystone V3
>> 
>> 1. RBAC was not working in Keystone V2 (it was only working in V3)
>> 2. In V3, I could create a role (like 'listRole') and create a user in a tenant with this role
>> 3. I had changed the RBAC rules in the policy.json file of keystone to allow a user with the 'listRole', in addition to admin, to run the "list_domains", "list_projects" and "list_users" operations
>>   (earlier these operations could only be run by admin or, we can say, super-user)
>> 4. These settings were successful and working perfectly fine.
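For reference, the policy.json change described in step 3 could look like the snippet below; the identity:list_* targets are keystone's V3 policy rule names, while 'listRole' is the example role from step 2:

    {
        "identity:list_domains": "role:admin or role:listRole",
        "identity:list_projects": "role:admin or role:listRole",
        "identity:list_users": "role:admin or role:listRole"
    }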
>> 
>> My point here is that by playing with RBAC and the V3 APIs of keystone, I could delegate rights to users.
>> 
>> So, I thought the same can be achieved in any other service (like nova).
>> For example, I thought that in nova I can also create a role and change the policy.json file to allow it to do the necessary operations like list, update, etc.
>> 
>> I could not do this check, because I wasn't able to run Nova with V3 successfully and also could not find the Nova V3 APIs.
>> 
>> But one thing I guess is missing here (even in keystone) is that, if we allow a normal user with a role to do certain operations, then he/she can do those operations in another domain or another project, to which he does not belong.
>> So, I guess this can be checked in the code. Let's use RBAC rules to check whether a person can run that query or not. Once RBAC says it is allowed, we can check whether an admin/super-user or a normal user is running that query.
>> If the user is admin, he can request anything. If the user is a normal user, then we can check whether he is asking only about his own domain or his own project. If so, allow it; otherwise raise an error.
> 
> This idea is great in principle, but "asking only for his domain or his project" doesn't make any sense in this case. In nova, objects are explicitly owned by a project. There is no way to determine if an object is part of a domain, so roles in that sense are non-functional. This is true across projects and is something that needs to be addressed.
> 
>> 
>> Part 2: Quotas
>> 
>> I would also like to discuss with you about quotas.
>> 
>> As you know, the current quota system is de-centralized and the driver available in nova is "DbQuotaDriver", which allows setting quotas for a tenant and for users in the tenant.
>> I could manage to point the quota driver to a new driver called "DomainQuotaDriver" (from Tiago Martins and team from HP) in the nova code. I had built a test case in which I checked that a tenant quota cannot be greater than the quota of the domain in which the tenant is registered. Even the sum of all tenant quotas cannot exceed their domain quota. What is missing here is the APIs to operate on quotas for domains. I thought of creating these APIs in V2 (as I could not find V3 APIs in nova). So, a new level called domain will be added to the existing quota APIs. For example, the current API "v2/{tenant_id}/os-quota-sets" allows seeing the quotas for a tenant. Probably, this can be changed to "v2/{domain_id}/{tenant_id}/os-quota-sets" to see the quotas for a tenant in a domain.
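The invariant described here, sketched as the check such a DomainQuotaDriver could perform (the function name and data shapes are assumptions for illustration):

    def check_tenant_limit(domain_limit, tenant_limits, tenant_id, new_limit):
        """Reject a tenant quota that would push the sum over the domain quota."""
        other = sum(limit for t, limit in tenant_limits.items() if t != tenant_id)
        if other + new_limit > domain_limit:
            raise ValueError('sum of tenant quotas would exceed the domain quota')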
> 
> Again this makes sense in principle. We do have the domain in the request context from keystone. Unfortunately, once again there is no mapping of domain to object so there is no way to count the existing objects to determine how much has already been used.
> 
> If you can make the Hierarchical Ownership meeting tomorrow we will discuss addressing these and other issues so that we can at the very least have a prototype solution.
> 
> Vish
>> 
>> I am currently trying to understand the nova-api code to see how API mapping is done (through routes) and how an API call actually leads to a python function being called. Once I complete this, I will start thinking about these APIs. Ideally, implementing the extension of domain quotas in the V3 APIs would have been good. But as I said, I could not find any documentation about the Nova V3 APIs.
>> 
>> 
>> I feel once we have Part 1 and Part 2, then quota delegation is not a big task, because with RBAC rules we can allow a user, let's say with a "tenant admin" role, to set the quotas for all the users in that tenant.
>> 
>> 
>> Please post your comments on this. Here at CERN we want to contribute to quota management (we earlier thought of centralized quotas, but are currently going with de-centralized quotas, with the openstack services keeping the quota data).
>> I will wait for your comments to guide us or tell us how we can contribute.
>> 
>> Thanks & Regards,
>> Vinod Kumar Boppanna
>> 
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ------------------------------
> 
> Message: 23
> Date: Wed, 29 Jan 2014 15:43:55 +0000
> From: "Robert Li (baoli)" <baoli at cisco.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>, "Jiang, Yunhong"
>        <yunhong.jiang at intel.com>, yongli he <yongli.he at intel.com>, "Ian Wells
>        (iawells)" <iawells at cisco.com>, "irenab at mellanox.com"
>        <irenab at mellanox.com>
> Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network
>        support
> Message-ID: <CF0E85F0.3D78A%baoli at cisco.com>
> Content-Type: text/plain; charset="Windows-1252"
> 
> Hi Yongli,
> 
> Thank you for addressing my comments, and for adding the encryption card
> use case. One thing that I want to point out is that in this use case, you
> may not use the pci-flavor in the --nic option because it's not a neutron
> feature.
> 
> I have a few more questions:
> 1. pci-flavor-attrs is configured through configuration files and will be
> available on both the controller node and the compute nodes. Can the cloud
> admin decide to add a new attribute in a running cloud? If that's
> possible, how is that done?
> 2. PCI flavor will be defined using the attributes in pci-flavor-attrs. A
> flavor is defined with a matching expression in the form of attr1 = val11
> [| val12 ?.], [attr2 = val21 [| val22 ?]], ?. And this expression is used
> to match one or more PCI stats groups until a free PCI device is located.
> In this case, both attr1 and attr2 can have multiple values, and both
> attributes need to be satisfied. Please confirm this understanding is
> correct.
> 3. I'd like to see an example that involves multiple attributes. Let's say
> pci-flavor-attrs = {gpu, net-group, device_id, product_id}. I'd like to
> know how PCI stats groups are formed on compute nodes based on that, and
> how many PCI stats groups there are. What are the reasonable guidelines
> for defining the PCI flavors?
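Robert's second question restates the matching semantics; as a sketch, a flavor's expression can be read as a per-attribute set of allowed values that a PCI stats group must satisfy on every attribute (this code is illustrative, not the proposed implementation):

    def flavor_matches(flavor_spec, stats_group):
        """flavor_spec maps attr -> set of allowed values; all attrs must match."""
        return all(stats_group.get(attr) in allowed
                   for attr, allowed in flavor_spec.items())

    # e.g. {'vendor_id': {'V1'}, 'device_id': {'0xa', '0xb'}} matches a
    # stats group {'vendor_id': 'V1', 'device_id': '0xb', 'count': 3}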
> 
> 
> thanks,
> Robert
> 
> 
> 
> On 1/28/14 10:16 PM, "Robert Li (baoli)" <baoli at cisco.com> wrote:
> 
>> Hi,
>> 
>> I added a few comments in this wiki that Yongli came up with:
>> https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support
>> 
>> Please check it out and look for Robert in the wiki.
>> 
>> Thanks,
>> Robert
>> 
>> On 1/21/14 9:55 AM, "Robert Li (baoli)" <baoli at cisco.com> wrote:
>> 
>>> Yunhong,
>>> 
>>> Just trying to understand your use case:
>>>   -- a VM can only work with cards from vendor V1
>>>   -- a VM can work with cards from both vendor V1 and V2
>>> 
>>>     So stats in the two flavors will overlap in the PCI flavor
>>> solution.
>>> I'm just trying to say that this is something that needs to be properly
>>> addressed.
>>> 
>>> 
>>> Just for the sake of discussion, another solution to meet the above
>>> requirement is to be able to say in the nova flavor's extra-spec
>>> 
>>>          encrypt_card = card from vendor V1 OR encrypt_card = card from
>>> vendor V2
>>> 
>>> 
>>> In other words, this can be solved in the nova flavor, rather than
>>> introducing a new flavor.
>>> 
>>> Thanks,
>>> Robert
>>> 
>>> 
>>> On 1/17/14 7:03 PM, "yunhong jiang" <yunhong.jiang at linux.intel.com>
>>> wrote:
>>> 
>>>> On Fri, 2014-01-17 at 22:30 +0000, Robert Li (baoli) wrote:
>>>>> Yunhong,
>>>>> 
>>>>> I'm hoping that these comments can be directly addressed:
>>>>>      a practical deployment scenario that requires arbitrary
>>>>> attributes.
>>>> 
>>>> I'm just strongly against supporting only one attribute (your PCI
>>>> group) for scheduling and management; that's really TOO limited.
>>>> 
>>>> A simple scenario is, I have 3 encryption card:
>>>>    Card 1 (vendor_id is V1, device_id =0xa)
>>>>    card 2(vendor_id is V1, device_id=0xb)
>>>>    card 3(vendor_id is V2, device_id=0xb)
>>>> 
>>>>    I have two images. One image only supports Card 1 and another image
>>>> supports Card 1/3 (or any other combination of the 3 card types). I don't
>>>> think only one attribute will meet such a requirement.
>>>> 
>>>> As to arbitrary attributes versus a limited list of attributes, my opinion is,
>>>> as there are so many types of PCI devices and so many potential PCI
>>>> device usages, that supporting arbitrary attributes will make our effort more
>>>> flexible, if we can push the implementation into the tree.
>>>> 
>>>>>      detailed design on the following (that also take into account
>>>>> the
>>>>> introduction of predefined attributes):
>>>>>        * PCI stats report since the scheduler is stats based
>>>> 
>>>> I don't think there are much difference with current implementation.
>>>> 
>>>>>        * the scheduler in support of PCI flavors with arbitrary
>>>>> attributes and potential overlapping.
>>>> 
>>>> As Ian said, we need to make sure the pci_stats and the PCI flavor have the
>>>> same set of attributes, so I don't think there are much difference with
>>>> current implementation.
>>>> 
>>>>>      networking requirements to support multiple provider
>>>>> nets/physical
>>>>> nets
>>>> 
>>>> Can't the extra info resolve this issue? Can you elaborate on the issue?
>>>> 
>>>> Thanks
>>>> --jyh
>>>>> 
>>>>> I guess that the above will become clear as the discussion goes on.
>>>>> And we
>>>>> also need to define the deliveries
>>>>> 
>>>>> Thanks,
>>>>> Robert
>>>> 
>>>> 
>>>> _______________________________________________
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>> 
> 
> 
> 
> 
> ------------------------------
> 
> Message: 24
> Date: Wed, 29 Jan 2014 19:44:16 +0400
> From: Sergey Lukjanov <slukjanov at mirantis.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [savanna] How to handle diverging EDP job
>        configuration settings
> Message-ID:
>        <CA+GZd7_HTfgrj8dKAaojk55ZitjJQe=jobx+E=t2bH=6ViM+Bg at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Trevor,
> 
> it sounds reasonable to move main_class and java_opts to edp.java.
> 
> Jon,
> 
> do you mean neutron-related info for namespaces support? If yes, then
> neutron isn't a user-side config.
> 
> Thanks.
> 
> 
> On Wed, Jan 29, 2014 at 6:37 PM, Jon Maron <jmaron at hortonworks.com> wrote:
> 
>> I imagine 'neutron' would follow suit as well..
>> 
>> On Jan 29, 2014, at 9:23 AM, Trevor McKay <tmckay at redhat.com> wrote:
>> 
>>> So, assuming we go forward with this, the followup question is whether
>>> or not to move "main_class" and "java_opts" for Java actions into
>>> "edp.java.main_class" and "edp.java.java_opts" configs.
>>> 
>>> I think yes.
>>> 
>>> Best,
>>> 
>>> Trevor
>>> 
>>> On Wed, 2014-01-29 at 09:15 -0500, Trevor McKay wrote:
>>>> On Wed, 2014-01-29 at 14:35 +0400, Alexander Ignatov wrote:
>>>>> Thank you for bringing this up, Trevor.
>>>>> 
>>>>> EDP gets more diverse and it's time to change its model.
>>>>> I totally agree with your proposal, but one minor comment.
>>>>> Instead of "savanna." prefix in job_configs wouldn't it be better to
>> make it
>>>>> as "edp."? I think "savanna." is too more wide word for this.
>>>> 
>>>> +1, brilliant. EDP is perfect.  I was worried about the scope of
>>>> "savanna." too.
>>>> 
>>>>> And one more bureaucratic thing... I see you already started
>> implementing it [1],
>>>>> and it is named and goes as a new EDP workflow [2]. I think a new blueprint
>> should be
>>>>> created for this feature to track all code changes as well as docs
>> updates.
>>>>> By docs I mean the public Savanna docs about EDP, REST API docs and samples.
>>>> 
>>>> Absolutely, I can make it a new blueprint.  Thanks.
>>>> 
>>>>> [1] https://review.openstack.org/#/c/69712
>>>>> [2]
>> https://blueprints.launchpad.net/openstack/?searchtext=edp-oozie-streaming-mapreduce
>>>>> 
>>>>> Regards,
>>>>> Alexander Ignatov
>>>>> 
>>>>> 
>>>>> 
>>>>> On 28 Jan 2014, at 20:47, Trevor McKay <tmckay at redhat.com> wrote:
>>>>> 
>>>>>> Hello all,
>>>>>> 
>>>>>> In our first pass at EDP, the model for job settings was very
>> consistent
>>>>>> across all of our job types. The execution-time settings fit into this
>>>>>> (superset) structure:
>>>>>> 
>>>>>> job_configs = {'configs': {}, # config settings for oozie and hadoop
>>>>>>          'params': {},  # substitution values for Pig/Hive
>>>>>>          'args': []}    # script args (Pig and Java actions)
>>>>>> 
>>>>>> But we have some things that don't fit (and probably more in the
>>>>>> future):
>>>>>> 
>>>>>> 1) Java jobs have 'main_class' and 'java_opts' settings
>>>>>> Currently these are handled as additional fields added to the
>>>>>> structure above.  These were the first to diverge.
>>>>>> 
>>>>>> 2) Streaming MapReduce (anticipated) requires mapper and reducer
>>>>>> settings (different than the mapred.xxxx.class settings for
>>>>>> non-streaming MapReduce)
>>>>>> 
>>>>>> Problems caused by adding fields
>>>>>> --------------------------------
>>>>>> The job_configs structure above is stored in the database. Each time
>> we
>>>>>> add a field to the structure above at the level of configs, params,
>> and
>>>>>> args, we force a change to the database tables, a migration script
>> and a
>>>>>> change to the JSON validation for the REST api.
>>>>>> 
>>>>>> We also cause a change for python-savannaclient and potentially other
>>>>>> clients.
>>>>>> 
>>>>>> This kind of change seems bad.
>>>>>> 
>>>>>> Proposal: Borrow a page from Oozie and add "savanna." configs
>>>>>> -------------------------------------------------------------
>>>>>> I would like to fit divergent job settings into the structure we
>> already
>>>>>> have.  One way to do this is to leverage the 'configs' dictionary.
>> This
>>>>>> dictionary primarily contains settings for hadoop, but there are a
>>>>>> number of "oozie.xxx" settings that are passed to oozie as configs or
>>>>>> set by oozie for the benefit of running apps.
>>>>>> 
>>>>>> What if we allow "savanna." settings to be added to configs?  If we do
>>>>>> that, any and all special configuration settings for specific job
>> types
>>>>>> or subtypes can be handled with no database changes and no api
>> changes.
>>>>>> 
>>>>>> Downside
>>>>>> --------
>>>>>> Currently, all 'configs' are rendered in the generated oozie workflow.
>>>>>> The "savanna." settings would be stripped out and processed by
>> Savanna,
>>>>>> thereby changing that behavior a bit (maybe not a big deal)
>>>>>> 
>>>>>> We would also be mixing "savanna." configs with config_hints for jobs,
>>>>>> so users would potentially see "savanna.xxxx" settings mixed with
>> oozie
>>>>>> and hadoop settings.  Again, maybe not a big deal, but it might blur
>> the
>>>>>> lines a little bit.  Personally, I'm okay with this.
>>>>>> 
>>>>>> Slightly different
>>>>>> ------------------
>>>>>> We could also add a "'savanna-configs': {}" element to job_configs to
>>>>>> keep the configuration spaces separate.
>>>>>> 
>>>>>> But, now we would have 'savanna-configs' (or another name), 'configs',
>>>>>> 'params', and 'args'.  Really? Just how many different types of values
>>>>>> can we come up with? :)
>>>>>> 
>>>>>> I lean away from this approach.
>>>>>> 
>>>>>> Related: breaking up the superset
>>>>>> ---------------------------------
>>>>>> 
>>>>>> It is also the case that not every job type has every value type.
>>>>>> 
>>>>>>           Configs   Params    Args
>>>>>> Hive            Y         Y        N
>>>>>> Pig             Y         Y        Y
>>>>>> MapReduce       Y         N        N
>>>>>> Java            Y         N        Y
>>>>>> 
>>>>>> So do we make that explicit in the docs and enforce it in the api with
>>>>>> errors?
>>>>>> 
>>>>>> Thoughts? I'm sure there are some :)
>>>>>> 
>>>>>> Best,
>>>>>> 
>>>>>> Trevor
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> _______________________________________________
>>>>>> OpenStack-dev mailing list
>>>>>> OpenStack-dev at lists.openstack.org
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>> 
>>>>> 
>>>>> _______________________________________________
>>>>> OpenStack-dev mailing list
>>>>> OpenStack-dev at lists.openstack.org
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> 
>>>> 
>>>> 
>>>> _______________________________________________
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>>> 
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> 
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.
> 
> ------------------------------
> 
> Message: 25
> Date: Wed, 29 Jan 2014 23:44:34 +0800
> From: Yingjun Li <liyingjun1988 at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Nova V2 Quota API
> Message-ID: <39F384E2-F5AE-493D-A0FC-3A9B3DD13431 at gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> I reported a bug here: https://bugs.launchpad.net/openstack-manuals/+bug/1274153
> 
> On Jan 29, 2014, at 23:33, Anne Gentle <anne at openstack.org> wrote:
> 
>> Hi, can you point out where you're seeing documentation for the first one, without tenant_id?
>> 
>> At http://api.openstack.org/api-ref-compute-ext.html#ext-os-quota-sets only the tenant_id is documented.
>> 
>> This is documented identically at http://docs.openstack.org/api/openstack-compute/2/content/ext-os-quota-sets.html
>> 
>> Let us know where you're seeing the misleading documentation so we can log a bug and fix it.
>> Anne
>> 
>> 
>> On Wed, Jan 29, 2014 at 8:48 AM, Vinod Kumar Boppanna <vinod.kumar.boppanna at cern.ch> wrote:
>> Hi,
>> 
>> In the documentation, it is mentioned that there are two APIs to see the quotas of a tenant.
>> 
>> 1. v2/{tenant_id}/os-quota-sets - Shows quotas for a tenant
>> 
>> 2. v2/{tenant_id}/os-quota-sets/{tenant_id}/{user_id} - Enables an admin to show quotas for a specified tenant and a user
>> 
>> I guess the first API can be used by a member in a tenant to get the quotas of that tenant. The second one can be run by admin to get the quotas of any tenant or any user.
>> 
>> But as a normal user, when I am running any of the below (after authentication)
>> 
>> $> nova --debug quota-show --tenant <tenant_id>    (tenant id of a project in which this user is member)
>> It is calling the second API i.e  v2/{tenant_id}/os-quota-sets/{tenant_id}
>> 
>> or even when I am calling the API directly
>> 
>> $>  curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:8774/v2/<tenant_id>/os-quota-sets/
>> It says the "Resource not found".
>> 
>> So, is the first API available?
>> 
>> Regards,
>> Vinod Kumar Boppanna
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ------------------------------
> 
> Message: 26
> Date: Wed, 29 Jan 2014 10:26:19 -0500
> From: Robert Kukura <rkukura at redhat.com>
> To: OpenStack Development Mailing List
>        <openstack-dev at lists.openstack.org>
> Cc: "Kyle Mestery \(kmestery\)" <kmestery at cisco.com>
> Subject: [openstack-dev] [nova][neutron][ml2] Proposal to support VIF
>        security, PCI-passthru/SR-IOV, and other binding-specific data
> Message-ID: <52E91D9B.3010705 at redhat.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> The neutron patch [1] and nova patch [2], proposed to resolve the
> "get_firewall_required should use VIF parameter from neutron" bug [3],
> replace the binding:capabilities attribute in the neutron portbindings
> extension with a new binding:vif_security attribute that is a dictionary
> with several keys defined to control VIF security. When using the ML2
> plugin, this binding:vif_security attribute flows from the bound
> MechanismDriver to nova's GenericVIFDriver.
> 
> Separately, work on PCI-passthru/SR-IOV for ML2 also requires
> binding-specific information to flow from the bound MechanismDriver to
> nova's GenericVIFDriver. See [4] for links to various documents and BPs
> on this.
> 
> A while back, in reviewing [1], I suggested a general mechanism to allow
> ML2 MechanismDrivers to supply arbitrary port attributes in order to
> meet both the above requirements. That approach was incorporated into
> [1] and has been cleaned up and generalized a bit in [5].
> 
> I'm now becoming convinced that proliferating new port attributes for
> various data passed from the neutron plugin (the bound MechanismDriver
> in the case of ML2) to nova's GenericVIFDriver is not such a great idea.
> One issue is that adding attributes keeps changing the API, but this
> isn't really a user-facing API. Another is that all ports should have
> the same set of attributes, so the plugin still has to be able to supply
> those attributes when a bound MechanismDriver does not supply them. See [5].
> 
> Instead, I'm proposing here that the binding:vif_security attribute
> proposed in [1] and [2] be renamed binding:vif_details, and used to
> transport whatever data needs to flow from the neutron plugin (i.e.
> ML2's bound MechanismDriver) to the nova GenericVIFDriver. This same
> dictionary attribute would be able to carry the VIF security key/value
> pairs defined in [1], those needed for [4], as well as any needed for
> future GenericVIFDriver features. The set of key/value pairs in
> binding:vif_details that apply would depend on the value of
> binding:vif_type.
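As an illustration of the proposal (key names other than port_filter, which comes from the VIF security patches in [1], are placeholders):

    # a port bound by an OVS-style mechanism driver
    {"binding:vif_type": "ovs",
     "binding:vif_details": {"port_filter": true}}

    # a port bound by a hypothetical SR-IOV mechanism driver
    {"binding:vif_type": "<some sriov vif type>",
     "binding:vif_details": {"pci_slot": "0000:08:00.2"}}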
> 
> If this proposal is agreed to, I can quickly write a neutron BP covering
> this and provide a generic implementation for ML2. Then [1] and [2]
> could be updated to use binding:vif_details for the VIF security data
> and eliminate the existing binding:capabilities attribute.
> 
> If we take this proposed approach of using binding:vif_details, the
> internal ML2 handling of binding:vif_type and binding:vif_details could
> either take the approach used for binding:vif_type and
> binding:capabilities in the current code, where the values are stored in
> the port binding DB table. Or they could take the approach in [5] where
> they are obtained from bound MechanismDriver when needed. Comments on
> these options are welcome.
> 
> Please provide feedback on this proposal and the various options in this
> email thread and/or at today's ML2 sub-team meeting.
> 
> Thanks,
> 
> -Bob
> 
> [1] https://review.openstack.org/#/c/21946/
> [2] https://review.openstack.org/#/c/44596/
> [3] https://bugs.launchpad.net/nova/+bug/1112912
> [4] https://wiki.openstack.org/wiki/Meetings/Passthrough
> [5] https://review.openstack.org/#/c/69783/
> 
> 
> 
> 
> ------------------------------
> 
> Message: 27
> Date: Wed, 29 Jan 2014 16:50:36 +0100
> From: Swann Croiset <swannon at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [Heat] [Nova] [oslo] [Ceilometer] about
>        notifications : huge and may be non secure
> Message-ID:
>        <CAEjdo88EJX8hT1F_g9dGkEknNFukHxZKxkPufOORcvxYwKbBhg at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Hi stackers,
> 
> I would like to share a concern here about notifications.
> 
> I'm working [1] on Heat notifications and I noticed that:
> 1/ Heat uses its context to store 'password'
> 2/ Heat and Nova store 'auth_token' in the context too. I didn't check other
> projects, except for neutron, which doesn't store auth_token.
> 
> This information is consequently sent through their notifications.
> 
> I guess we consider the broker as secured, and network communications with
> services too, BUT
> shouldn't we delete this data anyway, since IIRC it is never used (at
> least by ceilometer), and in doing so
> throw away the security question?
> 
> My other concern is the size (in KB) of notifications: with PKI tokens, the
> auth_token alone accounts for 70% of the payload!
> We can reduce the volume drastically and easily by deleting this data from
> notifications.
> I know that RabbitMQ (or any other broker) is very robust and can handle
> this volume, but when I see this kind of easy improvement, I'm tempted to
> do it.
> 
> I see an easy way to fix that in oslo-incubator [2]:
> delete these keys from the context if present, driven by configuration,
> with "password" and "auth_token" excluded by default.
> 
> thoughts?
> 
> [1]
> https://blueprints.launchpad.net/ceilometer/+spec/handle-heat-notifications
> [2]
> https://github.com/openstack/oslo-incubator/blob/master/openstack/common/notifier/rpc_notifier.py
> and others
> 
> ------------------------------
> 
> Message: 28
> Date: Wed, 29 Jan 2014 07:56:20 -0800
> From: Vishvananda Ishaya <vishvananda at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer
>        instances       through metadata service
> Message-ID: <42DFC5E1-2BCB-455C-98B7-2C8A34E5C4F0 at gmail.com>
> Content-Type: text/plain; charset="us-ascii"
> 
> 
> On Jan 29, 2014, at 5:26 AM, Justin Santa Barbara <justin at fathomdb.com> wrote:
> 
>> Certainly my original inclination (and code!) was to agree with you Vish, but:
>> 
>> 1) It looks like we're going to have writable metadata anyway, for
>> communication from the instance to the API.
>> 2) I believe the restrictions make it impractical to abuse it as a
>> message bus: size limits, quotas, and write-once semantics make it very
>> poorly suited for anything queue-like.
>> 3) Anything that isn't opt-in will likely have security implications
>> which means that it won't get deployed.  This must be deployed to be
>> useful.
> 
> Fair enough. I agree that there are significant enough security implications
> to skip the simple version.
> 
> Vish
> 
>> 
>> In short: I agree that it's not the absolute ideal solution (for me,
>> that would be no opt-in), but it feels like the best solution given
>> that we must have opt-in, or else e.g. HP won't deploy it.  It uses a
>> (soon to be) existing mechanism, and is readily extensible without
>> breaking APIs.
>> 
>> On your idea of scoping by security group, I believe a certain someone
>> is looking at supporting hierarchical projects, so we will likely need
>> to support more advanced logic here later anyway.  For example:  the
>> ability to specify whether an entry should be shared with instances in
>> child projects.  This will likely take the form of a sort of selector
>> language, so I anticipate we could offer a filter on security groups
>> as well if this is useful.  We might well also allow selection by
>> instance tags.  The approach allows this, though I would like to keep
>> it as simple as possible at first (share with other instances in the
>> project, or don't share).
>> 
>> Justin
>> 
>> 
>> On Tue, Jan 28, 2014 at 10:39 PM, Vishvananda Ishaya
>> <vishvananda at gmail.com> wrote:
>>> 
>>> On Jan 28, 2014, at 12:17 PM, Justin Santa Barbara <justin at fathomdb.com> wrote:
>>> 
>>>> Thanks John - combining with the existing effort seems like the right
>>>> thing to do (I've reached out to Claxton to coordinate).  Great to see
>>>> that the larger issues around quotas / write-once have already been
>>>> agreed.
>>>> 
>>>> So I propose that sharing will work in the same way, but some values
>>>> are visible across all instances in the project.  I do not think it
>>>> would be appropriate for all entries to be shared this way.  A few
>>>> options:
>>>> 
>>>> 1) A separate endpoint for shared values
>>>> 2) Keys are shared iff they start with a prefix, e.g. 'peers_XXX'
>>>> 3) Keys are set the same way, but a 'shared' parameter can be passed,
>>>> either as a query parameter or in the JSON.
>>>> 
>>>> I like option #3 the best, but feedback is welcome.
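>>>> As a sketch of option #3 (the 'shared' flag is hypothetical; whether it
>>>> goes in the query string or the body is exactly the open question):
>>>> 
>>>>     POST /v2/{tenant_id}/servers/{server_id}/metadata
>>>>     {"metadata": {"peer_address": "10.0.0.5"}, "shared": true}
>>>> 
>>>> Entries posted without the flag would stay per-instance, as today.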
>>>> 
>>>> I think I will have to store the value using a system_metadata entry
>>>> per shared key.  I think this avoids issues with concurrent writes,
>>>> and also makes it easier to have more advanced sharing policies (e.g.
>>>> when we have hierarchical projects)
>>>> 
>>>> Thank you to everyone for helping me get to what IMHO is a much better
>>>> solution than the one I started with!
>>>> 
>>>> Justin
>>> 
>>> I am -1 on the post data. I think we should avoid using the metadata
>>> service as a cheap queue for communicating across VMs, and this moves
>>> strongly in that direction.
>>> 
>>> I am +1 on providing a list of IP addresses in the current security
>>> group(s) via metadata. I like limiting by security group instead of by
>>> project because this could prevent the 1000-instance case where people
>>> have large shared tenants, and it also gives a single tenant a way to
>>> have multiple autodiscovered services. Also, the security group info is
>>> something that neutron has access to, so the neutron proxy should be
>>> able to generate the necessary info if neutron is in use.
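>>> As a sketch, the guest-visible result might be an extra key in
>>> meta_data.json (the key name is illustrative):
>>> 
>>>     GET http://169.254.169.254/openstack/latest/meta_data.json
>>>     {
>>>         ...,
>>>         "security_group_peers": {
>>>             "default": ["10.0.0.5", "10.0.0.7"]
>>>         }
>>>     }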
>>> 
>>> Just as an interesting side note, we put this VM list in way back in the
>>> NASA days as an easy way to get MPI clusters running. In that case we
>>> grouped the instances by the key_name used to launch the instance instead
>>> of by security group. I don't think it occurred to us to use security
>>> groups at the time. Note we also provided the number of cores, but this
>>> was for convenience because the MPI implementation didn't support
>>> discovering the number of cores. Code below.
>>> 
>>> Vish
>>> 
>>> $ git show 2cf40bb3
>>> commit 2cf40bb3b21d33f4025f80d175a4c2ec7a2f8414
>>> Author: Vishvananda Ishaya <vishvananda at yahoo.com>
>>> Date:   Thu Jun 24 04:11:54 2010 +0100
>>> 
>>>   Adding mpi data
>>> 
>>> diff --git a/nova/endpoint/cloud.py b/nova/endpoint/cloud.py
>>> index 8046d42..74da0ee 100644
>>> --- a/nova/endpoint/cloud.py
>>> +++ b/nova/endpoint/cloud.py
>>> @@ -95,8 +95,21 @@ class CloudController(object):
>>>    def get_instance_by_ip(self, ip):
>>>        return self.instdir.by_ip(ip)
>>> 
>>> +    def _get_mpi_data(self, project_id):
>>> +        result = {}
>>> +        for node_name, node in self.instances.iteritems():
>>> +            for instance in node.values():
>>> +                if instance['project_id'] == project_id:
>>> +                    line = '%s slots=%d' % (instance['private_dns_name'], instance.get('vcpus', 0))
>>> +                    if instance['key_name'] in result:
>>> +                        result[instance['key_name']].append(line)
>>> +                    else:
>>> +                        result[instance['key_name']] = [line]
>>> +        return result
>>> +
>>>    def get_metadata(self, ip):
>>>        i = self.get_instance_by_ip(ip)
>>> +        mpi = self._get_mpi_data(i['project_id'])
>>>        if i is None:
>>>            return None
>>>        if i['key_name']:
>>> @@ -135,7 +148,8 @@ class CloudController(object):
>>>                'public-keys' : keys,
>>>                'ramdisk-id': i.get('ramdisk_id', ''),
>>>                'reservation-id': i['reservation_id'],
>>> -                'security-groups': i.get('groups', '')
>>> +                'security-groups': i.get('groups', ''),
>>> +                'mpi': mpi
>>>            }
>>>        }
>>>        if False: # TODO: store ancestor ids
>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Tue, Jan 28, 2014 at 4:38 AM, John Garbutt <john at johngarbutt.com> wrote:
>>>>> On 27 January 2014 14:52, Justin Santa Barbara <justin at fathomdb.com> wrote:
>>>>>> Day, Phil wrote:
>>>>>> 
>>>>>>> 
>>>>>>>>> We already have a mechanism now where an instance can push metadata as
>>>>>>>>> a way of Windows instances sharing their passwords - so maybe this
>>>>>>>>> could
>>>>>>>>> build on that somehow - for example each instance pushes the data it's
>>>>>>>>> willing to share with other instances owned by the same tenant?
>>>>>>>> 
>>>>>>>> I do like that and think it would be very cool, but it is much more
>>>>>>>> complex to implement, I think.
>>>>>>> 
>>>>>>> I don't think it's that complicated - it just needs one extra attribute
>>>>>>> stored per instance (for example in instance_system_metadata) which
>>>>>>> allows the instance to be included in the list
>>>>>> 
>>>>>> 
>>>>>> Ah - OK, I think I better understand what you're proposing, and I do like
>>>>>> it.  The hardest bit of having the metadata store be full read/write would
>>>>>> be defining what is and is not allowed (rate-limits, size-limits, etc).  I
>>>>>> worry that you end up with a new key-value store, and with per-instance
>>>>>> credentials.  That would be a separate discussion: this blueprint is trying
>>>>>> to provide a focused replacement for multicast discovery for the cloud.
>>>>>> 
>>>>>> But thank you for reminding me about the Windows password...  It may
>>>>>> provide a reasonable model:
>>>>>> 
>>>>>> We would have a new endpoint, say 'discovery'.  An instance can POST a
>>>>>> single string value to the endpoint.  A GET on the endpoint will return any
>>>>>> values posted by all instances in the same project.
>>>>>> 
>>>>>> One key only; name not publicly exposed ('discovery_datum'?); 255 bytes of
>>>>>> value only.
>>>>>> 
>>>>>> I expect most instances will just post their IPs, but I expect other uses
>>>>>> will be found.
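>>>>>> As a sketch of the mechanics (the path is illustrative only):
>>>>>> 
>>>>>>     POST /openstack/latest/discovery    (body: "10.0.0.5", max 255 bytes)
>>>>>>     GET  /openstack/latest/discovery    ->  ["10.0.0.5", "10.0.0.7"]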
>>>>>> 
>>>>>> If I provided a patch that worked in this way, would you/others be on-board?
>>>>> 
>>>>> I like that idea. Seems like a good compromise. I have added my review
>>>>> comments to the blueprint.
>>>>> 
>>>>> We have this related blueprint going on, for setting metadata on a
>>>>> particular server rather than a group:
>>>>> https://blueprints.launchpad.net/nova/+spec/metadata-service-callbacks
>>>>> 
>>>>> It limits things using the existing quota on metadata updates.
>>>>> 
>>>>> It would be good to agree on a similar format between the two.
>>>>> 
>>>>> John
>>>>> 
>>> 
>>> 
> 
> 
> ------------------------------
> 
> Message: 29
> Date: Wed, 29 Jan 2014 07:59:04 -0800
> From: Vishvananda Ishaya <vishvananda at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Hierarchical Multitenancy Discussion
> Message-ID: <C20E64E3-48CA-44A1-BAF3-4D98386C9556 at gmail.com>
> Content-Type: text/plain; charset="windows-1252"
> 
> I apologize for the confusion. The wiki time of 2100 UTC is the correct time (noon Pacific time). We can move the next meeting to a different day/time that is more convenient for Europe.
> 
> Vish
> 
> 
> On Jan 29, 2014, at 1:56 AM, Florent Flament <florent.flament-ext at cloudwatt.com> wrote:
> 
>> Hi Vishvananda,
>> 
>> I would be interested in such a working group.
>> Can you please confirm the meeting hour for this Friday ?
>> I've seen 1600 UTC in your email and 2100 UTC in the wiki (https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting ). As I'm in Europe I'd prefer 1600 UTC.
>> 
>> Florent Flament
>> 
>> ----- Original Message -----
>> From: "Vishvananda Ishaya" <vishvananda at gmail.com>
>> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
>> Sent: Tuesday, January 28, 2014 7:35:15 PM
>> Subject: [openstack-dev] Hierarchical Multitenancy Discussion
>> 
>> Hi Everyone,
>> 
>> I apologize for the obtuse title, but there isn't a better succinct term to describe what is needed. OpenStack has no support for multiple owners of objects. This means that a variety of private cloud use cases are simply not supported. Specifically, objects in the system can only be managed on the tenant level or globally.
>> 
>> The key use case here is to delegate administration rights for a group of tenants to a specific user/role. There is something in Keystone called a "domain" which supports part of this functionality, but without support from all of the projects, this concept is pretty useless.
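>> As a purely illustrative sketch (not the wiki strawman itself), treating
>> ownership as a hierarchical path would make the delegation check simple:
>> 
>>     def owns(admin_scope, project_path):
>>         # owns('companya', 'companya/dev') -> True
>>         # owns('companya', 'companyb/dev') -> False
>>         return (project_path == admin_scope or
>>                 project_path.startswith(admin_scope + '/'))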
>> 
>> In IRC today I had a brief discussion about how we could address this. I have put some details and a straw man up here:
>> 
>> https://wiki.openstack.org/wiki/HierarchicalMultitenancy
>> 
>> I would like to discuss this strawman and organize a group of people to get actual work done by having an IRC meeting this Friday at 1600 UTC. I know this time is probably a bit tough for Europe, so if we decide we need a regular meeting to discuss progress then we can vote on a better time for this meeting.
>> 
>> https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
>> 
>> Please note that this is going to be an active team that produces code. We will *NOT* spend a lot of time debating approaches, and instead focus on making something that works and learning as we go. The output of this team will be a MultiTenant devstack install that actually works, so that we can ensure the features we are adding to each project work together.
>> 
>> Vish
>> 
> 
> 
> ------------------------------
> 
> Message: 30
> Date: Wed, 29 Jan 2014 16:03:03 +0000
> From: "Robert Li (baoli)" <baoli at cisco.com>
> To: Irena Berezovsky <irenab at mellanox.com>, "rkukura at redhat.com"
>        <rkukura at redhat.com>, "Sandhya Dasu (sadasu)" <sadasu at cisco.com>,
>        "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
>        Jan. 29th
> Message-ID: <CF0E8E31.3D7ED%baoli at cisco.com>
> Content-Type: text/plain; charset="windows-1252"
> 
> Hi Irena,
> 
> I'm now even more confused. I must have missed something. See inline.
> 
> Thanks,
> Robert
> 
> On 1/29/14 10:19 AM, "Irena Berezovsky" <irenab at mellanox.com> wrote:
> 
> Hi Robert,
> I think that I can go with Bob's suggestion, but I think it makes sense to cover the vnic_type and PCI-passthru via two separate patches. Adding vnic_type will probably impose changes to existing Mech. Drivers, while PCI-passthru is about introducing some pieces for new SRIOV-supporting Mech. Drivers.
> 
> More comments inline
> 
> BR,
> IRena
> 
> From: Robert Li (baoli) [mailto:baoli at cisco.com]
> Sent: Wednesday, January 29, 2014 4:47 PM
> To: Irena Berezovsky; rkukura at redhat.com; Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th
> 
> Hi folks,
> 
> I'd like to do a recap on today's meeting, and if possible we should continue the discussion in this thread so that we can be more productive in tomorrow's meeting.
> 
> Bob suggests that we have these BPs:
> One generic covering implementing binding:profile in ML2, and one specific to PCI-passthru, defining the vnic-type (wherever it goes) and any keys for binding:profile.
> 
> 
> Irena suggests that we have three BPs:
> 1. generic ML2 support for binding:profile (corresponding to Bob's covering implementing binding:profile in ML2 ?)
> 2. add vnic_type support for binding Mech Drivers in ML2 plugin
> 3. support PCI slot via profile (corresponding to Bob's any keys for binding:profile ?)
> 
> Both proposals sound similar, so it's great that we are converging. I think it's important that we put more details in each BP about what exactly it covers. One question I have is about where binding:profile will be implemented. I see that port binding is defined/implemented under its extension and in neutron.db. So when both of you talk about implementing binding:profile in ML2, I'm kind of confused. Please let me know what I'm missing here. My understanding is that non-ML2 plugins can use it as well.
> [IrenaB] Basically you are right. Currently ML2 does not inherit the DB mixin for port binding; it supports the port binding extension, but uses its own DB table to store the relevant attributes. Making it work for ML2 means adding this support in ML2 itself, not in the PortBindingMixin.
> 
> [ROBERT] Does that mean binding:profile for PCI can't be used by non-ML2 plugins?
> 
> Another issue that came up during the meeting is whether vnic-type should be part of the top-level binding or part of binding:profile. In other words, should it be defined as binding:vnic-type or binding:profile:vnic-type?
> [IrenaB] As long as the existing binding-capable Mech Drivers take vnic_type into consideration, I guess doing it via binding:profile will introduce fewer changes overall (CLI, API). But I am not sure this reason is strong enough to choose this direction.
> We also need one or two BPs to cover the changes to the neutron port-create/port-show CLI/API.
> [IrenaB] binding:profile is already supported, so it probably depends on the direction taken with vnic_type
> 
> [ROBERT] Can you let me know where in the code binding:profile is supported? In portbindings_db.py, the PortBindingPort model doesn't have a column for binding:profile, so I guess I must have missed it.
> Regarding BPs for the CLI/API, we are planning to add vnic-type and profileid in the CLI, as well as the new keys in binding:profile. Are you saying that no changes are needed (e.g. displaying them, interpreting the added CLI arguments, etc.), and therefore no new BPs are needed for them?
> 
> Another thing is that we need to define the binding:profile dictionary.
> [IrenaB] With regards to PCI SRIOV related attributes, right?
> 
> [ROBERT] yes.
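> As a sketch, the two shapes under discussion would look roughly like this
> (the values and the 'pci_slot' key are illustrative only):
> 
>     # vnic_type as a top-level binding attribute:
>     {'binding:vnic_type': 'direct',
>      'binding:profile': {'pci_slot': '0000:03:10.1'}}
> 
>     # vnic_type carried inside binding:profile:
>     {'binding:profile': {'vnic_type': 'direct',
>                          'pci_slot': '0000:03:10.1'}}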
> 
> 
> Thanks,
> Robert
> 
> 
> 
> On 1/29/14 4:02 AM, "Irena Berezovsky" <irenab at mellanox.com> wrote:
> 
> Will attend
> 
> From: Robert Li (baoli) [mailto:baoli at cisco.com]
> Sent: Wednesday, January 29, 2014 12:55 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th
> 
> Hi Folks,
> 
> Can we have one more meeting tomorrow? I'd like to discuss the blueprints we are going to have and what each BP will be covering.
> 
> thanks,
> Robert
> 
> ------------------------------
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> End of OpenStack-dev Digest, Vol 21, Issue 92
> *********************************************
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



More information about the OpenStack-dev mailing list