[openstack-dev] [Ironic] Some questions about Ironic service
xianchaobo
xianchaobo at huawei.com
Thu Dec 11 02:09:03 UTC 2014
Hi, Fox Kevin M,
Thanks for your help.
Also, I want to know whether these features will be implemented in Ironic.
Do we have a plan to implement them?
Thanks
Xianchaobo
-----Original Message-----
From: openstack-dev-request at lists.openstack.org [mailto:openstack-dev-request at lists.openstack.org]
Sent: December 9, 2014 18:36
To: openstack-dev at lists.openstack.org
Subject: OpenStack-dev Digest, Vol 32, Issue 25
Send OpenStack-dev mailing list submissions to
openstack-dev at lists.openstack.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
or, via email, send a message with subject or body 'help' to
openstack-dev-request at lists.openstack.org
You can reach the person managing the list at
openstack-dev-owner at lists.openstack.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of OpenStack-dev digest..."
Today's Topics:
1. [Mistral] Query on creating multiple resources (Sushma Korati)
2. Re: [neutron] Changes to the core team
(trinath.somanchi at freescale.com)
3. [Neutron][OVS] ovs-ofctl-to-python blueprint (YAMAMOTO Takashi)
4. Re: [api] Using query string or request body to pass
parameter (Alex Xu)
5. [Ironic] Some questions about Ironic service (xianchaobo)
6. [Ironic] How to get past pxelinux.0 bootloader? (Peeyush Gupta)
7. Re: [neutron] Changes to the core team (Gariganti, Sudhakar Babu)
8. Re: [neutron][lbaas] Shared Objects in LBaaS - Use Cases that
led us to adopt this. (Samuel Bercovici)
9. [Mistral] Action context passed to all action executions by
default (W Chan)
10. Cross-Project meeting, Tue December 9th, 21:00 UTC
(Thierry Carrez)
11. Re: [Mistral] Query on creating multiple resources
(Renat Akhmerov)
12. Re: [Mistral] Query on creating multiple resources
(Renat Akhmerov)
13. Re: [Mistral] Event Subscription (Renat Akhmerov)
14. Re: [Mistral] Action context passed to all action executions
by default (Renat Akhmerov)
15. Re: Cross-Project meeting, Tue December 9th, 21:00 UTC (joehuang)
16. Re: [Ironic] Some questions about Ironic service (Fox, Kevin M)
17. [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and
vif_driver (Maxime Leroy)
18. Re: [Ironic] How to get past pxelinux.0 bootloader? (Fox, Kevin M)
19. Re: [Ironic] Fuel agent proposal (Roman Prykhodchenko)
20. Re: [Ironic] How to get past pxelinux.0 bootloader?
(Peeyush Gupta)
21. Re: Cross-Project meeting, Tue December 9th, 21:00 UTC
(Thierry Carrez)
22. [neutron] mid-cycle "hot reviews" (Miguel Ángel Ajo)
23. Re: [horizon] REST and Django (Tihomir Trifonov)
----------------------------------------------------------------------
Message: 1
Date: Tue, 9 Dec 2014 05:57:35 +0000
From: Sushma Korati <sushma_korati at persistent.com>
To: "gokrokvertskhov at mirantis.com" <gokrokvertskhov at mirantis.com>,
"zbitter at redhat.com" <zbitter at redhat.com>
Cc: "openstack-dev at lists.openstack.org"
<openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [Mistral] Query on creating multiple
resources
Message-ID: <1418105060569.62922 at persistent.com>
Content-Type: text/plain; charset="iso-8859-1"
Hi,
Thank you guys.
Yes, I am able to do this with Heat, but I faced issues while trying the same with Mistral.
As suggested, I will try with the latest Mistral master branch. Thank you once again.
Regards,
Sushma
________________________________
From: Georgy Okrokvertskhov [mailto:gokrokvertskhov at mirantis.com]
Sent: Tuesday, December 09, 2014 6:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Mistral] Query on creating multiple resources
Hi Sushma,
Did you explore Heat templates? As Zane mentioned you can do this via Heat template without writing any workflows.
Do you have any specific use cases which you can't solve with Heat template?
Create VM workflow was a demo example. Mistral potentially can be used by Heat or other orchestration tools to do actual interaction with API, but for user it might be easier to use Heat functionality.
Thanks,
Georgy
On Mon, Dec 8, 2014 at 7:54 AM, Nikolay Makhotkin <nmakhotkin at mirantis.com<mailto:nmakhotkin at mirantis.com>> wrote:
Hi, Sushma!
Can we create multiple resources using a single task, like multiple keypairs or security-groups or networks etc?
Yes, we can. This feature is in development now and is considered experimental - https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections
Just clone the latest master branch of Mistral.
You can specify the "for-each" task property and provide an array of data to your workflow:
--------------------
version: '2.0'

name: secgroup_actions

workflows:
  create_security_group:
    type: direct
    input:
      - array_with_names_and_descriptions

    tasks:
      create_secgroups:
        for-each:
          data: $.array_with_names_and_descriptions
        action: nova.security_groups_create name={$.data.name} description={$.data.description}
--------------------
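For illustration (the names and descriptions here are made-up examples), the input such a workflow expects is just an array of name/description pairs:

    # Hypothetical input for the workflow above; names and descriptions
    # are example values only.
    workflow_input = {
        "array_with_names_and_descriptions": [
            {"name": "web", "description": "allow http/https"},
            {"name": "db", "description": "allow mysql from the web tier"},
        ]
    }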
On Mon, Dec 8, 2014 at 6:36 PM, Zane Bitter <zbitter at redhat.com<mailto:zbitter at redhat.com>> wrote:
On 08/12/14 09:41, Sushma Korati wrote:
Can we create multiple resources using a single task, like multiple
keypairs or security-groups or networks etc?
Define them in a Heat template and create the Heat stack as a single task.
- ZB
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Best Regards,
Nikolay
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com<http://www.mirantis.com/>
Tel. +1 650 963 9828
Mob. +1 650 996 3284
DISCLAIMER
==========
This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20141209/420872fd/attachment-0001.html>
------------------------------
Message: 2
Date: Tue, 9 Dec 2014 05:57:44 +0000
From: "trinath.somanchi at freescale.com"
<trinath.somanchi at freescale.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron] Changes to the core team
Message-ID:
<BN1PR03MB153E9E8A2FB42D92649EA0897650 at BN1PR03MB153.namprd03.prod.outlook.com>
Content-Type: text/plain; charset="utf-8"
Congratulations Kevin and Henry!
--
Trinath Somanchi - B39208
trinath.somanchi at freescale.com | extn: 4048
From: Kyle Mestery [mailto:mestery at mestery.com]
Sent: Monday, December 08, 2014 11:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Changes to the core team
On Tue, Dec 2, 2014 at 9:59 AM, Kyle Mestery <mestery at mestery.com<mailto:mestery at mestery.com>> wrote:
Now that we're in the thick of working hard on Kilo deliverables, I'd
like to make some changes to the neutron core team. Reviews are the
most important part of being a core reviewer, so we need to ensure
cores are doing reviews. The stats for the 180 day period [1] indicate
some changes are needed for cores who are no longer reviewing.
First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from
neutron-core. Bob and Nachi have been core members for a while now.
They have contributed to Neutron over the years in reviews, code and
leading sub-teams. I'd like to thank them for all that they have done
over the years. I'd also like to propose that should they start
reviewing more going forward the core team looks to fast track them
back into neutron-core. But for now, their review stats place them
below the rest of the team for 180 days.
As part of the changes, I'd also like to propose two new members to
neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have
been very active in reviews, meetings, and code for a while now. Henry
led the DB team which fixed Neutron DB migrations during Juno. Kevin
has been actively working across all of Neutron, he's done some great
work on security fixes and stability fixes in particular. Their
comments in reviews are insightful and they have helped to onboard new
reviewers and taken the time to work with people on their patches.
Existing neutron cores, please vote +1/-1 for the addition of Henry
and Kevin to the core team.
Enough time has passed now, and Kevin and Henry have received enough +1 votes. So I'd like to welcome them to the core team!
Thanks,
Kyle
Thanks!
Kyle
[1] http://stackalytics.com/report/contribution/neutron-group/180
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20141209/5258ee17/attachment-0001.html>
------------------------------
Message: 3
Date: Tue, 9 Dec 2014 14:58:04 +0900 (JST)
From: yamamoto at valinux.co.jp (YAMAMOTO Takashi)
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [Neutron][OVS] ovs-ofctl-to-python blueprint
Message-ID: <20141209055804.536E57094A at kuma.localdomain>
Content-Type: Text/Plain; charset=us-ascii
hi,
here's a blueprint to make OVS agent use Ryu to talk with OVS.
https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python
https://review.openstack.org/#/c/138980/ (kilo spec)
given that ML2/OVS is one of the most popular plugins and the proposal
has a few possibly controversial points, I want to ask for wider opinions.
- it introduces a new requirement for OVS agent. (Ryu)
- it makes OVS agent require newer OVS version than it currently does.
- what to do for xenapi support is still under investigation/research.
- possible security impact.
please comment on gerrit if you have any opinions. thank you.
YAMAMOTO Takashi
------------------------------
Message: 4
Date: Tue, 9 Dec 2014 14:28:33 +0800
From: Alex Xu <soulxu at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [api] Using query string or request body
to pass parameter
Message-ID:
<CAH7mGavuTfWON=PgDcMrg9hoNrYViO+e=FdeuEsrWH26OtMxTg at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Kevin, thanks for the info! I agree with you: the RFC is the authority, and
using a payload in a DELETE request isn't a good way to go.
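For illustration, a minimal sketch of the query-parameter approach (the endpoint path, "force" flag, and token handling here are placeholders, not an existing nova API):

    # Sketch only: pass a modifier as a query parameter on DELETE instead
    # of a request body; URL, flag name, and auth header are placeholders.
    import requests

    def force_delete_server(endpoint, token, server_id):
        url = "%s/servers/%s" % (endpoint, server_id)
        resp = requests.delete(
            url,
            params={"force": "true"},          # query string, no body
            headers={"X-Auth-Token": token},
        )
        resp.raise_for_status()
        return resp.status_code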
2014-12-09 7:58 GMT+08:00 Kevin L. Mitchell <kevin.mitchell at rackspace.com>:
> On Tue, 2014-12-09 at 07:38 +0800, Alex Xu wrote:
> > Not sure all, nova is limited
> > at
> https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L79
> > That under our control.
>
> It is, but the client frameworks aren't, and some of them prohibit
> sending a body with a DELETE request. Further, RFC7231 has this to say
> about DELETE request bodies:
>
> A payload within a DELETE request message has no defined semantics;
> sending a payload body on a DELETE request might cause some
> existing
> implementations to reject the request.
>
> (§4.3.5)
>
> I think we have to conclude that, if we need a request body, we cannot
> use the DELETE method. We can modify the operation, such as setting a
> "force" flag, with a query parameter on the URI, but a request body
> should be considered out of bounds with respect to DELETE.
>
> > Maybe not just ask question for delete, also for other method.
> >
> > 2014-12-09 1:11 GMT+08:00 Kevin L. Mitchell <
> kevin.mitchell at rackspace.com>:
> > On Mon, 2014-12-08 at 14:07 +0800, Eli Qiao wrote:
> > > I wonder if we can use body in delete, currently , there isn't
> any
> > > case used in v2/v3 api.
> >
> > No, many frameworks raise an error if you try to include a body
> with a
> > DELETE request.
> > --
> > Kevin L. Mitchell <kevin.mitchell at rackspace.com>
> > Rackspace
>
> --
> Kevin L. Mitchell <kevin.mitchell at rackspace.com>
> Rackspace
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20141209/0de78824/attachment-0001.html>
------------------------------
Message: 5
Date: Tue, 9 Dec 2014 06:29:50 +0000
From: xianchaobo <xianchaobo at huawei.com>
To: "openstack-dev at lists.openstack.org"
<openstack-dev at lists.openstack.org>
Cc: "Luohao \(brian\)" <brian.luohao at huawei.com>
Subject: [openstack-dev] [Ironic] Some questions about Ironic service
Message-ID:
<D008C043F9162E45A8153719DED7CC4B33FB930A at szxeml559-mbx.china.huawei.com>
Content-Type: text/plain; charset="us-ascii"
Hello, all
I'm trying to install and configure the Ironic service, and a few things confuse me.
I create two neutron networks, a public network and a private network.
The private network is used to deploy physical machines.
The public network is used to provide floating IPs.
(1) Can the private network type be VLAN or VXLAN? (In the install guide, the network type is flat.)
(2) Can the network of deployed physical machines be managed by neutron?
(3) Can different tenants have their own networks to manage physical machines?
(4) Does Ironic provide some mechanism for deployed physical machines
to use storage such as shared storage or Cinder volumes?
Thanks,
XianChaobo
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20141209/c752d0da/attachment-0001.html>
------------------------------
Message: 6
Date: Tue, 09 Dec 2014 12:25:39 +0530
From: Peeyush Gupta <gpeeyush at linux.vnet.ibm.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [Ironic] How to get past pxelinux.0
bootloader?
Message-ID: <54869CEB.4040602 at linux.vnet.ibm.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi all,
So, I have set up a devstack Ironic environment for baremetal deployment. I
have been able to deploy a baremetal node successfully using the
pxe_ipmitool driver. Now, I am trying to boot a server where I already
have a bootloader, i.e. I don't need pxelinux to go and fetch the kernel and
initrd images for me. I want to transfer them directly.
I checked out the code and figured out that there are dhcp opts
available that are modified using pxe_utils.py, but changing them didn't
help. Then I moved to ironic.conf, but there I also only see an option to
set pxe_bootfile_name, which is exactly what I want to avoid. Can anyone
please help me with this situation? I don't want to go through the
pxelinux.0 bootloader, I just want to transfer the kernel and
initrd images directly.
Thanks.
--
Peeyush Gupta
gpeeyush at linux.vnet.ibm.com
------------------------------
Message: 7
Date: Tue, 9 Dec 2014 07:07:20 +0000
From: "Gariganti, Sudhakar Babu" <sudhakar-babu.gariganti at hp.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron] Changes to the core team
Message-ID:
<DB734B7E9E26414E9E6CA4EC5ED3AC2D2446B947 at G6W2498.americas.hpqcorp.net>
Content-Type: text/plain; charset="utf-8"
Congrats Kevin and Henry!
From: Kyle Mestery [mailto:mestery at mestery.com]
Sent: Monday, December 08, 2014 11:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Changes to the core team
On Tue, Dec 2, 2014 at 9:59 AM, Kyle Mestery <mestery at mestery.com<mailto:mestery at mestery.com>> wrote:
Now that we're in the thick of working hard on Kilo deliverables, I'd
like to make some changes to the neutron core team. Reviews are the
most important part of being a core reviewer, so we need to ensure
cores are doing reviews. The stats for the 180 day period [1] indicate
some changes are needed for cores who are no longer reviewing.
First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from
neutron-core. Bob and Nachi have been core members for a while now.
They have contributed to Neutron over the years in reviews, code and
leading sub-teams. I'd like to thank them for all that they have done
over the years. I'd also like to propose that should they start
reviewing more going forward the core team looks to fast track them
back into neutron-core. But for now, their review stats place them
below the rest of the team for 180 days.
As part of the changes, I'd also like to propose two new members to
neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have
been very active in reviews, meetings, and code for a while now. Henry
led the DB team which fixed Neutron DB migrations during Juno. Kevin
has been actively working across all of Neutron, he's done some great
work on security fixes and stability fixes in particular. Their
comments in reviews are insightful and they have helped to onboard new
reviewers and taken the time to work with people on their patches.
Existing neutron cores, please vote +1/-1 for the addition of Henry
and Kevin to the core team.
Enough time has passed now, and Kevin and Henry have received enough +1 votes. So I'd like to welcome them to the core team!
Thanks,
Kyle
Thanks!
Kyle
[1] http://stackalytics.com/report/contribution/neutron-group/180
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20141209/e53c063f/attachment-0001.html>
------------------------------
Message: 8
Date: Tue, 9 Dec 2014 07:28:03 +0000
From: Samuel Bercovici <SamuelB at Radware.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS
- Use Cases that led us to adopt this.
Message-ID:
<F36E8145F2571242A675F56CF609645628965468 at ILMB1.corp.radware.com>
Content-Type: text/plain; charset="utf-8"
Hi,
I agree that the most important thing is to conclude how status properties are being managed and handled so it will remain consistent as we move along.
I am fine with starting with a simple model and expanding as needed.
The L7 implementation is ready and waiting for the rest of the model to get in, so pool sharing under a listener is something that we should solve now.
I think that pool sharing under listeners connected to the same LB is more common than what you describe.
-Sam.
From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
Sent: Tuesday, December 09, 2014 12:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.
So... I should probably note that I see the case where a user actually shares objects as being the exception. I expect that 90% of deployments will never need to share objects, except for a few cases-- those cases (of 1:N relationships) are:
* Loadbalancers must be able to have many Listeners
* When L7 functionality is introduced, L7 policies must be able to refer to the same Pool under a single Listener. (That is to say, sharing Pools under the scope of a single Listener makes sense, but only after L7 policies are introduced.)
I specifically see the following kind of sharing having near zero demand:
* Listeners shared across multiple loadbalancers
* Pools shared across multiple listeners
* Members shared across multiple pools
So, despite the fact that sharing doesn't make status reporting any more or less complex, I'm still in favor of starting with 1:1 relationships between most kinds of objects and then changing those to 1:N or M:N as we get user demand for this. As I said in my first response, allowing too many many to many relationships feels like a solution to a problem that doesn't really exist, and introduces a lot of unnecessary complexity.
Stephen
On Sun, Dec 7, 2014 at 11:43 PM, Samuel Bercovici <SamuelB at radware.com<mailto:SamuelB at radware.com>> wrote:
+1
From: Stephen Balukoff [mailto:sbalukoff at bluebox.net<mailto:sbalukoff at bluebox.net>]
Sent: Friday, December 05, 2014 7:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.
German-- but the point is that sharing apparently has no effect on the number of permutations for status information. The only difference here is that without sharing it's more work for the user to maintain and modify trees of objects.
On Fri, Dec 5, 2014 at 9:36 AM, Eichberger, German <german.eichberger at hp.com<mailto:german.eichberger at hp.com>> wrote:
Hi Brandon + Stephen,
Having all those permutations (and potentially testing them) made us lean against the sharing case in the first place. It's just a lot of extra work for only a small number of our customers.
German
From: Stephen Balukoff [mailto:sbalukoff at bluebox.net<mailto:sbalukoff at bluebox.net>]
Sent: Thursday, December 04, 2014 9:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.
Hi Brandon,
Yeah, in your example, member1 could potentially have 8 different statuses (and this is a small example!)... If that member starts flapping, it means that every time it flaps there are 8 notifications being passed upstream.
Note that this problem actually doesn't get any better if we're not sharing objects but are just duplicating them (ie. not sharing objects but the user makes references to the same back-end machine as 8 different members.)
To be honest, I don't see sharing entities at many levels like this being the rule for most of our installations-- maybe a few percentage points of installations will do an excessive sharing of members, but I doubt it. So really, even though reporting status like this is likely to generate a pretty big tree of data, I don't think this is actually a problem, eh. And I don't see sharing entities actually reducing the workload of what needs to happen behind the scenes. (It just allows us to conceal more of this work from the user.)
Stephen
On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan <brandon.logan at rackspace.com<mailto:brandon.logan at rackspace.com>> wrote:
Sorry it's taken me a while to respond to this.
So I wasn't thinking about this correctly. I was afraid you would have
to pass in a full tree of parent-child representations to /loadbalancers
to update anything a load balancer is associated with (including down
to members). However, after thinking about it, a user would just make
an association call on each object. For example, associate member1 with
pool1, associate pool1 with listener1, then associate loadbalancer1 with
listener1. Updating is just as simple as updating each entity.
This does bring up another problem though. If a listener can live on
many load balancers, and a pool can live on many listeners, and a member
can live on many pools, there are lots of permutations to keep track of for
status. You can't just link a member's status to a load balancer because a
member can exist in many pools under that load balancer, and each pool
can exist under many listeners under that load balancer. For example,
say I have these:
lb1
lb2
listener1
listener2
pool1
pool2
member1
member2
lb1 -> [listener1, listener2]
lb2 -> [listener1]
listener1 -> [pool1, pool2]
listener2 -> [pool1]
pool1 -> [member1, member2]
pool2 -> [member1]
member1 can now have different statuses under pool1 and pool2. Since
listener1 and listener2 both have pool1, this means member1 will now
have a different status for the listener1 -> pool1 and listener2 -> pool1
combinations. And so forth for load balancers.
Basically there are a lot of permutations and combinations to keep track
of with this model for statuses. Showing these in the body of load
balancer details can get quite large.
I hope this makes sense because my brain is ready to explode.
Thanks,
Brandon
On Thu, 2014-11-27 at 08:52 +0000, Samuel Bercovici wrote:
> Brandon, can you please explain further (1) below?
>
> -----Original Message-----
> From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM<mailto:brandon.logan at RACKSPACE.COM>]
> Sent: Tuesday, November 25, 2014 12:23 AM
> To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.
>
> My impression is that the statuses of each entity will be shown on a detailed info request of a loadbalancer. The root level objects would not have any statuses. For example a user makes a GET request to /loadbalancers/{lb_id} and the status of every child of that load balancer is shown in a "status_tree" json object. For example:
>
> {"name": "loadbalancer1",
> "status_tree":
> {"listeners":
> [{"name": "listener1", "operating_status": "ACTIVE",
> "default_pool":
> {"name": "pool1", "status": "ACTIVE",
> "members":
> [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}
>
> Sam, correct me if I am wrong.
>
> I generally like this idea. I do have a few reservations with this:
>
> 1) Creating and updating a load balancer requires a full tree configuration with the current extension/plugin logic in neutron. Since updates will require a full tree, it means the user would have to know the full tree configuration just to simply update a name. Solving this would require nested child resources in the URL, which the current neutron extension/plugin does not allow. Maybe the new one will.
>
> 2) The status_tree can get quite large depending on the number of listeners and pools being used. This is a minor issue really as it will make horizon's (or any other UI tool's) job easier to show statuses.
>
> Thanks,
> Brandon
>
> On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote:
> > Hi Samuel,
> >
> >
> > We've actually been avoiding having a deeper discussion about status
> > in Neutron LBaaS since this can get pretty hairy as the back-end
> > implementations get more complicated. I suspect managing that is
> > probably one of the bigger reasons we have disagreements around object
> > sharing. Perhaps it's time we discussed representing state "correctly"
> > (whatever that means), instead of a round-a-bout discussion about
> > object sharing (which, I think, is really just avoiding this issue)?
> >
> >
> > Do you have a proposal about how status should be represented
> > (possibly including a description of the state machine) if we collapse
> > everything down to be logical objects except the loadbalancer object?
> > (From what you're proposing, I suspect it might be too general to, for
> > example, represent the UP/DOWN status of members of a given pool.)
> >
> >
> > Also, from an haproxy perspective, sharing pools within a single
> > listener actually isn't a problem. That is to say, having the same
> > L7Policy pointing at the same pool is OK, so I personally don't have a
> > problem allowing sharing of objects within the scope of parent
> > objects. What do the rest of y'all think?
> >
> >
> > Stephen
> >
> >
> >
> > On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici
> > <SamuelB at radware.com<mailto:SamuelB at radware.com>> wrote:
> > Hi Stephen,
> >
> >
> >
> > 1. The issue is that if we do 1:1 and allow status/state
> > to proliferate throughout all objects, we will then have an
> > issue to fix later; hence even if we do not do sharing, I
> > would still like to have all objects besides the LB be treated as
> > logical.
> >
> > 2. The 3rd use case below will not be reasonable without
> > pool sharing between different policies. Specifying different
> > pools which are the same for each policy makes it a non-starter
> > to me.
> >
> >
> >
> > -Sam.
> >
> > From: Stephen Balukoff [mailto:sbalukoff at bluebox.net<mailto:sbalukoff at bluebox.net>]
> > Sent: Friday, November 21, 2014 10:26 PM
> > To: OpenStack Development Mailing List (not for usage
> > questions)
> > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects
> > in LBaaS - Use Cases that led us to adopt this.
> >
> >
> >
> > I think the idea was to implement 1:1 initially to reduce the
> > amount of code and operational complexity we'd have to deal
> > with in initial revisions of LBaaS v2. Many to many can be
> > simulated in this scenario, though it does shift the burden of
> > maintenance to the end user. It does greatly simplify the
> > initial code for v2, in any case, though.
> >
> > Did we ever agree to allowing listeners to be shared among
> > load balancers? I think that still might be a N:1
> > relationship even in our latest models.
> >
> >
> >
> >
> > There's also the difficulty introduced by supporting different
> > flavors: Since flavors are essentially an association between
> > a load balancer object and a driver (with parameters), once
> > flavors are introduced, any sub-objects of a given load
> > balancer objects must necessarily be purely logical until they
> > are associated with a load balancer. I know there was talk of
> > forcing these objects to be sub-objects of a load balancer
> > which can't be accessed independently of the load balancer
> > (which would have much the same effect as what you discuss:
> > State / status only make sense once logical objects have an
> > instantiation somewhere.) However, the currently proposed API
> > treats most objects as root objects, which breaks this
> > paradigm.
> >
> > How we handle status and updates once there's an instantiation
> > of these logical objects is where we start getting into real
> > complexity.
> >
> > It seems to me there's a lot of complexity introduced when we
> > allow a lot of many to many relationships without a whole lot
> > of benefit in real-world deployment scenarios. In most cases,
> > objects are not going to be shared, and in those cases with
> > sufficiently complicated deployments in which shared objects
> > could be used, the user is likely to be sophisticated enough
> > and skilled enough to manage updating what are essentially
> > "copies" of objects, and would likely have an opinion about
> > how individual failures should be handled which wouldn't
> > necessarily coincide with what we developers of the system
> > would assume. That is to say, allowing too many many to many
> > relationships feels like a solution to a problem that doesn't
> > really exist, and introduces a lot of unnecessary complexity.
> >
> > In any case, though, I feel like we should walk before we run:
> > Implementing 1:1 initially is a good idea to get us rolling.
> > Whether we then implement 1:N or M:N after that is another
> > question entirely. But in any case, it seems like a bad idea
> > to try to start with M:N.
> >
> >
> >
> >
> >
> > Stephen
> >
> > On Thu, Nov 20, 2014 at 4:52 AM, Samuel Bercovici
> > <SamuelB at radware.com<mailto:SamuelB at radware.com>> wrote:
> >
> > Hi,
> >
> > Per discussion I had at OpenStack Summit/Paris with Brandon
> > and Doug, I would like to remind everyone why we choose to
> > follow a model where pools and listeners are shared (many to
> > many relationships).
> >
> > Use Cases:
> > 1. The same application is being exposed via different LB
> > objects.
> > For example: users coming from the internal "private"
> > organization network, have an LB1(private_VIP) -->
> > Listener1(TLS) -->Pool1 and user coming from the "internet",
> > have LB2(public_vip)-->Listener1(TLS)-->Pool1.
> > This may also happen to support ipv4 and ipv6: LB_v4(ipv4_VIP)
> > --> Listener1(TLS) -->Pool1 and LB_v6(ipv6_VIP) -->
> > Listener1(TLS) -->Pool1
> > The operator would like to be able to manage the pool
> > membership in cases of updates and error in a single place.
> >
> > 2. The same group of servers is being used via different
> > listeners optionally also connected to different LB objects.
> > For example: users coming from the internal "private"
> > organization network, have an LB1(private_VIP) -->
> > Listener1(HTTP) -->Pool1 and users coming from the "internet",
> > have LB2(public_vip)-->Listener2(TLS)-->Pool1.
> > The LBs may use different flavors as LB2 needs TLS termination
> > and may prefer a different "stronger" flavor.
> > The operator would like to be able to manage the pool
> > membership in cases of updates and error in a single place.
> >
> > 3. The same group of servers is being used in several
> > different L7_Policies connected to a listener. Such listener
> > may be reused as in use case 1.
> > For example: LB1(VIP1)-->Listener_L7(TLS)
> >                           |
> >                           +-->L7_Policy1(rules..)-->Pool1
> >                           |
> >                           +-->L7_Policy2(rules..)-->Pool2
> >                           |
> >                           +-->L7_Policy3(rules..)-->Pool1
> >                           |
> >                           +-->L7_Policy3(rules..)-->Reject
> >
> >
> > I think that the "key" issue is handling correctly the
> > "provisioning" state and the operation state in a many-to-many
> > model.
> > This is an issue as we have attached status fields to each and
> > every object in the model.
> > A side effect of the above is that to understand the
> > "provisioning/operation" status one needs to check many
> > different objects.
> >
> > To remedy this, I would like to turn all objects besides the
> > LB to be logical objects. This means that the only place to
> > manage the status/state will be on the LB object.
> > Such status should be hierarchical so that logical object
> > attached to an LB, would have their status consumed out of the
> > LB object itself (in case of an error).
> > We also need to discuss how modifications of a logical object
> > will be "rendered" to the concrete LB objects.
> > You may want to revisit
> > https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r the "Logical Model + Provisioning Status + Operation Status + Statistics" for a somewhat more detailed explanation albeit it uses the LBaaS v1 model as a reference.
> >
> > Regards,
> > -Sam.
> >
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
> >
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > --
> >
> > Stephen Balukoff
> > Blue Box Group, LLC
> > (800)613-4305 x807<tel:%28800%29613-4305%20x807>
> >
> >
> >
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
> >
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > --
> > Stephen Balukoff
> > Blue Box Group, LLC
> > (800)613-4305 x807<tel:%28800%29613-4305%20x807>
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807<tel:%28800%29613-4305%20x807>
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807<tel:%28800%29613-4305%20x807>
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20141209/dafa4e7e/attachment-0001.html>
------------------------------
Message: 9
Date: Mon, 8 Dec 2014 23:39:38 -0800
From: W Chan <m4d.coder at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [Mistral] Action context passed to all action
executions by default
Message-ID:
<CABNy8O72cfRSPVP0vq_p3uB2viqXo4XY7o0kEp8EA8ekti29Nw at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Renat,
Is there any reason why Mistral does not pass action context such as the workflow
ID, execution ID, task ID, etc. to all of the action executions? I
think it makes a lot of sense for that information to be made available by
default. The action can then decide what to do with the information. It
doesn't require a special signature in the __init__ method of the Action
classes. What do you think?
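To sketch the idea (a rough illustration only, not Mistral's actual Action interface; class and key names are hypothetical): the engine would hand every action execution a context dict, and the action decides what, if anything, to do with it.

    # Rough sketch of the proposal; not Mistral's real interface.
    class Action(object):
        def run(self, action_context=None):
            raise NotImplementedError()

    class EchoContextAction(Action):
        def run(self, action_context=None):
            ctx = action_context or {}
            # The action decides what to do with the identifiers it receives.
            return {
                "workflow_id": ctx.get("workflow_id"),
                "execution_id": ctx.get("execution_id"),
                "task_id": ctx.get("task_id"),
            }

    # Engine side (shape only): every action execution receives the context,
    # with no special __init__ signature required on the Action subclass.
    result = EchoContextAction().run(action_context={
        "workflow_id": "wf-123",
        "execution_id": "ex-456",
        "task_id": "task-789",
    })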
Thanks.
Winson
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20141208/a0238f56/attachment-0001.html>
------------------------------
Message: 10
Date: Tue, 09 Dec 2014 09:38:50 +0100
From: Thierry Carrez <thierry at openstack.org>
To: OpenStack Development Mailing List
<openstack-dev at lists.openstack.org>
Subject: [openstack-dev] Cross-Project meeting, Tue December 9th,
21:00 UTC
Message-ID: <5486B51A.6080001 at openstack.org>
Content-Type: text/plain; charset=utf-8
Dear PTLs, cross-project liaisons and anyone else interested,
We'll have a cross-project meeting Tuesday at 21:00 UTC, with the
following agenda:
* Convergence on specs process (johnthetubaguy)
* Approval process differences
* Path structure differences
* specs.o.o aspect differences (toc)
* osprofiler config options (kragniz)
* Glance uses a different name from other projects
* Consensus on what name to use
* Open discussion & announcements
See you there !
For more details, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting
--
Thierry Carrez (ttx)
------------------------------
Message: 11
Date: Tue, 9 Dec 2014 14:48:10 +0600
From: Renat Akhmerov <rakhmerov at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Mistral] Query on creating multiple
resources
Message-ID: <8CC8FE04-056A-4574-926B-BAF5B7749C84 at mirantis.com>
Content-Type: text/plain; charset=utf-8
Hey,
I think it's a question of what the final goal is. For just creating security groups as a resource I think Georgy and Zane are right, just use Heat. If the goal is to try Mistral or to have this simple workflow as part of something more complex then it's totally fine to use Mistral. Sorry, I'm probably biased because Mistral is our baby :). Anyway, Nikolay has already answered the question technically; this "for-each" feature will be available officially in about 2 weeks.
> Create VM workflow was a demo example. Mistral potentially can be used by Heat or other orchestration tools to do actual interaction with API, but for user it might be easier to use Heat functionality.
I kind of disagree with that statement. Mistral can be used by whoever finds it useful for their needs. The standard "create_instance" workflow (which is in "resources/workflows/create_instance.yaml") is not just a demo example either. It does a lot of good stuff you may really need in your case (e.g. retry policies), even though it's true that it has some limitations we're aware of. For example, when it comes to configuring a network for a newly created instance, it is currently missing network-related parameters to be able to alter behavior.
One more thing: not only will Heat be able to call Mistral somewhere underneath the surface; Mistral already has integration with Heat to be able to call it if needed, and there's a plan to make it even more useful and usable.
Thanks
Renat Akhmerov
@ Mirantis Inc.
------------------------------
Message: 12
Date: Tue, 9 Dec 2014 14:49:32 +0600
From: Renat Akhmerov <rakhmerov at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Mistral] Query on creating multiple
resources
Message-ID: <33D7E423-9FDD-4D3D-9D0B-65ADA937852F at mirantis.com>
Content-Type: text/plain; charset="iso-8859-1"
No problem, let us know if you have any other questions.
Renat Akhmerov
@ Mirantis Inc.
> On 09 Dec 2014, at 11:57, Sushma Korati <sushma_korati at persistent.com> wrote:
>
>
> Hi,
>
> Thank you guys.
>
> Yes I am able to do this with heat, but I faced issues while trying the same with mistral.
> As suggested will try with the latest mistral branch. Thank you once again.
>
> Regards,
> Sushma
>
>
>
>
> From: Georgy Okrokvertskhov [mailto:gokrokvertskhov at mirantis.com <mailto:gokrokvertskhov at mirantis.com>]
> Sent: Tuesday, December 09, 2014 6:07 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Mistral] Query on creating multiple resources
>
> Hi Sushma,
>
> Did you explore Heat templates? As Zane mentioned you can do this via Heat template without writing any workflows.
> Do you have any specific use cases which you can't solve with Heat template?
>
> Create VM workflow was a demo example. Mistral potentially can be used by Heat or other orchestration tools to do actual interaction with API, but for user it might be easier to use Heat functionality.
>
> Thanks,
> Georgy
>
> On Mon, Dec 8, 2014 at 7:54 AM, Nikolay Makhotkin <nmakhotkin at mirantis.com <mailto:nmakhotkin at mirantis.com>> wrote:
> Hi, Sushma!
>
> Can we create multiple resources using a single task, like multiple keypairs or security-groups or networks etc?
>
> Yes, we can. This feature is in the development now and it is considered as experimental -https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections <https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections>
>
> Just clone the last master branch from mistral.
>
> You can specify "for-each" task property and provide the array of data to your workflow:
>
> --------------------
> version: '2.0'
>
> name: secgroup_actions
>
> workflows:
>   create_security_group:
>     type: direct
>     input:
>       - array_with_names_and_descriptions
>
>     tasks:
>       create_secgroups:
>         for-each:
>           data: $.array_with_names_and_descriptions
>         action: nova.security_groups_create name={$.data.name} description={$.data.description}
> --------------------
>
> On Mon, Dec 8, 2014 at 6:36 PM, Zane Bitter <zbitter at redhat.com <mailto:zbitter at redhat.com>> wrote:
> On 08/12/14 09:41, Sushma Korati wrote:
> Can we create multiple resources using a single task, like multiple
> keypairs or security-groups or networks etc?
>
> Define them in a Heat template and create the Heat stack as a single task.
>
> - ZB
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org <mailto:OpenStack-dev at lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>
>
>
> --
> Best Regards,
> Nikolay
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org <mailto:OpenStack-dev at lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>
>
>
> --
> Georgy Okrokvertskhov
> Architect,
> OpenStack Platform Products,
> Mirantis
> http://www.mirantis.com <http://www.mirantis.com/>
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284
> DISCLAIMER ========== This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org <mailto:OpenStack-dev at lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20141209/b16a3f9f/attachment-0001.html>
------------------------------
Message: 13
Date: Tue, 9 Dec 2014 14:52:38 +0600
From: Renat Akhmerov <rakhmerov at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Mistral] Event Subscription
Message-ID: <17C46BD9-F6F6-43AE-883E-2512EE698CDD at mirantis.com>
Content-Type: text/plain; charset="utf-8"
Ok, got it.
So my general suggestion here is: let's keep it as simple as possible for now, create something that works and then let's see how to improve it. And yes, consumers may be and mostly will be 3rd parties.
Thanks
Renat Akhmerov
@ Mirantis Inc.
> On 09 Dec 2014, at 08:25, W Chan <m4d.coder at gmail.com> wrote:
>
> Renat,
>
> On sending events to an "exchange", I mean an exchange on the transport (i.e. rabbitMQ exchange https://www.rabbitmq.com/tutorials/amqp-concepts.html <https://www.rabbitmq.com/tutorials/amqp-concepts.html>). On implementation we can probably explore the notification feature in oslo.messaging. But on second thought, this would limit the consumers to trusted subsystems or services though. If we want the event consumers to be any 3rd party, including untrusted, then maybe we should keep it as HTTP calls.
>
> Winson
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20141209/5f5eb550/attachment-0001.html>
------------------------------
Message: 14
Date: Tue, 9 Dec 2014 15:22:28 +0600
From: Renat Akhmerov <rakhmerov at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Mistral] Action context passed to all
action executions by default
Message-ID: <74548DCF-F417-4EA8-B3AE-CDD3024A0753 at mirantis.com>
Content-Type: text/plain; charset=us-ascii
Hi Winson,
I think it makes perfect sense. The reason, I think, is mostly historical, and this can be reviewed now. Can you please file a BP and describe your suggested design for that? I mean how we need to alter the Action interface, etc.
Thanks
Renat Akhmerov
@ Mirantis Inc.
> On 09 Dec 2014, at 13:39, W Chan <m4d.coder at gmail.com> wrote:
>
> Renat,
>
> Is there any reason why Mistral do not pass action context such as workflow ID, execution ID, task ID, and etc to all of the action executions? I think it makes a lot of sense for that information to be made available by default. The action can then decide what to do with the information. It doesn't require a special signature in the __init__ method of the Action classes. What do you think?
>
> Thanks.
> Winson
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
------------------------------
Message: 15
Date: Tue, 9 Dec 2014 09:33:47 +0000
From: joehuang <joehuang at huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] Cross-Project meeting, Tue December 9th,
21:00 UTC
Message-ID:
<5E7A3D1BF5FD014E86E5F971CF446EFF541FB758 at szxema505-mbs.china.huawei.com>
Content-Type: text/plain; charset="us-ascii"
Hi,
If time is available, how about adding one agenda item to guide the direction for cascading to move forward? Thanks in advance.
The topic is: "Need a cross-program decision to run cascading as an incubated project mode or to register BPs separately in each involved project. CI for cascading is quite different from a traditional test environment; at least 3 OpenStack instances are required for cross-OpenStack networking test cases."
--------------------------------------------------------------------------------------------------------------------------------
In the 40-minute cross-project summit session "Approaches for scaling out"[1], almost 100 people attended, and the conclusion was that cells cannot cover the use cases and requirements which the OpenStack cascading solution[2] aims to address; the background, including use cases and requirements, is also described in this mail.
After the summit, we ported the PoC[3] source code from an Icehouse base to a Juno base.
Now, let's move forward:
The major task is to introduce new drivers/agents to existing core projects, for the core idea of cascading is to add Nova as the hypervisor backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, and Ceilometer as the store of Ceilometer.
a). Need a cross-program decision to run cascading as an incubated project mode or to register BPs separately in each involved project. CI for cascading is quite different from a traditional test environment; at least 3 OpenStack instances are required for cross-OpenStack networking test cases.
b). A volunteer as the cross-project coordinator.
c). Volunteers for implementation and CI. (Already 6 engineers are working on cascading in the StackForge/tricircle project.)
Background of OpenStack cascading vs cells:
1. Use cases
a). Vodafone use case[4] (OpenStack summit speech video from 9'02" to 12'30"), establishing globally addressable tenants which results in efficient service deployment.
b). Telefonica use case[5], creating a virtual DC (data center) across multiple physical DCs with a seamless experience.
c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6, and #8. For the NFV cloud, it is in its nature that the cloud will be distributed but inter-connected across many data centers.
2. Requirements
a). The operator has a multi-site cloud; each site can use one or multiple vendors' OpenStack distributions.
b). Each site has its own requirements and upgrade schedule while maintaining the standard OpenStack API.
c). The multi-site cloud must provide unified resource management with a global open API exposed, for example to create a virtual DC across multiple physical DCs with a seamless experience.
Although a proprietary orchestration layer could be developed for the multi-site cloud, it would expose a proprietary API at the north-bound interface. The cloud operators want an ecosystem-friendly global open API for the multi-site cloud for global access.
3. What problems does cascading solve that cells doesn't cover:
The OpenStack cascading solution is "OpenStack orchestrating OpenStacks". The core architecture idea of OpenStack cascading is to add Nova as the hypervisor backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, and Ceilometer as the store of Ceilometer. Thus OpenStack is able to orchestrate OpenStacks (from different vendors' distributions, or different versions) which may be located in different sites (or data centers) through the OpenStack API, while the cloud still exposes the OpenStack API as the north-bound API at the cloud level.
4. Why cells can't do that:
Cells provide the scale-out capability to Nova, but from the point of view of OpenStack as a whole, it still works like one OpenStack instance.
a). If Cells is deployed with shared Cinder, Neutron, Glance, and Ceilometer, this approach provides the multi-site cloud with one unified API endpoint and unified resource management, but consolidation of multi-vendor/multi-version OpenStack instances across one or more data centers cannot be fulfilled.
b). If each site installs one child cell and accompanying standalone Cinder, Neutron (or Nova-network), Glance, and Ceilometer, this approach makes multi-vendor/multi-version OpenStack distribution co-existence across multiple sites seem feasible, but the requirement for a unified API endpoint and unified resource management cannot be fulfilled. Cross-Neutron networking automation is also missing, and would otherwise have to be done manually or via a proprietary orchestration layer.
For more information about cascading and cells, please refer to the discussion thread before Paris Summit [7].
[1]Approaches for scaling out: https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack
[2]OpenStack cascading solution: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[3]Cascading PoC: https://github.com/stackforge/tricircle
[4]Vodafone use case (9'02" to 12'30"): https://www.youtube.com/watch?v=-KOJYvhmxQI
[5]Telefonica use case: http://www.telefonica.com/en/descargas/mwc/present_20140224.pdf
[6]ETSI NFV use cases: http://www.etsi.org/deliver/etsi_gs/nfv/001_099/001/01.01.01_60/gs_nfv001v010101p.pdf
[7]Cascading thread before design summit: http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-td54115.html
Best Regards
Chaoyi Huang ( Joe Huang )
-----Original Message-----
From: Thierry Carrez [mailto:thierry at openstack.org]
Sent: Tuesday, December 09, 2014 4:39 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC
Dear PTLs, cross-project liaisons and anyone else interested,
We'll have a cross-project meeting Tuesday at 21:00 UTC, with the following agenda:
* Convergence on specs process (johnthetubaguy)
* Approval process differences
* Path structure differences
* specs.o.o aspect differences (toc)
* osprofiler config options (kragniz)
* Glance uses a different name from other projects
* Consensus on what name to use
* Open discussion & announcements
See you there !
For more details, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting
--
Thierry Carrez (ttx)
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
------------------------------
Message: 16
Date: Tue, 9 Dec 2014 09:52:11 +0000
From: "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Cc: "Luohao \(brian\)" <brian.luohao at huawei.com>
Subject: Re: [openstack-dev] [Ironic] Some questions about Ironic
service
Message-ID:
<1A3C52DFCD06494D8528644858247BF017815FE1 at EX10MBOX03.pnnl.gov>
Content-Type: text/plain; charset="windows-1252"
No to questions 1, 3, and 4. Yes to 2, but very minimally.
________________________________
From: xianchaobo
Sent: Monday, December 08, 2014 10:29:50 PM
To: openstack-dev at lists.openstack.org
Cc: Luohao (brian)
Subject: [openstack-dev] [Ironic] Some questions about Ironic service
Hello, all
I'm trying to install and configure Ironic service, something confused me.
I create two neutron networks, public network and private network.
Private network is used to deploy physical machines
Public network is used to provide floating ip.
(1) Can the private network type be VLAN or VXLAN? (In the install guide, the network type is flat.)
(2) Can the network of deployed physical machines be managed by Neutron?
(3) Can different tenants have their own networks to manage physical machines?
(4) Does Ironic provide a mechanism for deployed physical machines
to use storage such as shared storage or Cinder volumes?
Thanks,
XianChaobo
------------------------------
Message: 17
Date: Tue, 9 Dec 2014 10:53:19 +0100
From: Maxime Leroy <maxime.leroy at 6wind.com>
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech
driver/L2 and vif_driver
Message-ID:
<CAEykdvrSnMN0-OAFC0b1MzmJ5+10AvFiigHoKc_fR6FfyADzCw at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Hi there.
I would like some clarification regarding support for out-of-tree
plugins in nova and in neutron.
First, on the neutron side, there are mechanisms to support out-of-tree
plugins, both for the L2 plugin (core_plugin) and for ML2 mechanism drivers
(via stevedore entry points).
Most ML2/L2 plugins need a specific VIF driver. Since the
vif_driver configuration option in nova was removed in Juno, it is
no longer possible to have an external mechanism driver/L2 plugin.
The nova community took the decision to stop supporting VIF driver
classes as a public extension point (ref
http://lists.openstack.org/pipermail/openstack-dev/2014-August/043174.html).
In contrast, the neutron community continues to support
external L2/ML2 mechanism driver plugins. Moreover, the decision to move
the monolithic plugins and ML2 mechanism drivers out of tree was
taken at the Paris summit (ref
https://review.openstack.org/#/c/134680/15/specs/kilo/core-vendor-decomposition.rst).
I am a bit confused by these two opposite decisions of the
two communities. What am I missing?
I have also proposed a blueprint to add a new plugin mechanism in
nova to load external VIF drivers (nova-specs:
https://review.openstack.org/#/c/136827/ and nova (RFC patch):
https://review.openstack.org/#/c/136857/).
From a developer's point of view, having a plugin framework for
internal/external VIF drivers seems to be a good thing.
It makes the code more modular and introduces a clear API for VIF driver classes.
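As an illustration of what such loading could look like, here is a minimal sketch based on stevedore; the entry-point namespace 'nova.virt.vif_drivers' and the driver name are hypothetical placeholders, not something nova defines today:

    # Minimal sketch, not the blueprint's implementation; the namespace and
    # driver name below are hypothetical placeholders.
    from stevedore import driver


    def load_vif_driver(name):
        """Resolve the named entry point and instantiate the VIF driver class."""
        mgr = driver.DriverManager(
            namespace='nova.virt.vif_drivers',  # hypothetical namespace
            name=name,                          # name advertised by the external package
            invoke_on_load=True,                # instantiate the class on load
        )
        return mgr.driver


    # An out-of-tree package would then only have to declare the entry point
    # in its setup.cfg, e.g. load_vif_driver('myvendor_vif'); nova would never
    # need to import it by module path.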
That said, it raises legitimate questions concerning API stability and
the public API that deserve a wider discussion on the ML (as asked by
John Garbutt).
I think having a plugin mechanism and a clear API for VIF drivers does
not go against this policy:
http://docs.openstack.org/developer/nova/devref/policies.html#out-of-tree-support.
There is no need for a stable API: it is up to the owner of the
external VIF driver to ensure that the driver works with the
latest API, not up to the nova community to maintain a stable API for this
external VIF driver. Does that make sense?
Considering the network v2 API, the L2/ML2 mechanism driver and the VIF driver
need to exchange information such as binding:vif_type and
binding:vif_details.
From my understanding, 'binding:vif_type' and 'binding:vif_details' are
fields of the public network API. There are no validation
constraints on these fields (see
http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html),
which means that any value is accepted by the API. So the values set in
'binding:vif_type' and 'binding:vif_details' are not part of the
public API. Is my understanding correct?
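For concreteness, a bound port returned by the network v2 API might carry binding attributes like the following; the values are only typical of an Open vSwitch binding and are illustrative rather than normative:

    # Illustrative only: what a bound Neutron port's binding fields may look like.
    # The concrete vif_type and vif_details depend on the mechanism driver that
    # bound the port, which is exactly why the API does not validate them.
    port_binding_example = {
        "binding:host_id": "compute-1",
        "binding:vif_type": "ovs",
        "binding:vif_details": {
            "port_filter": True,      # backend enforces security groups
            "ovs_hybrid_plug": True,  # hybrid plugging strategy (illustrative)
        },
    }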
What other reasons am I missing for not having VIF driver classes as a
public extension point?
Thanks in advance for your help.
Maxime
------------------------------
Message: 18
Date: Tue, 9 Dec 2014 09:54:21 +0000
From: "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Ironic] How to get past pxelinux.0
bootloader?
Message-ID:
<1A3C52DFCD06494D8528644858247BF017815FEF at EX10MBOX03.pnnl.gov>
Content-Type: text/plain; charset="us-ascii"
You probably want to use the agent driver, not the pxe one. It lets you use bootloaders from the image.
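As an illustration, switching an already enrolled node over to the agent-based IPMI driver with python-ironicclient could look like the sketch below; the credentials and node UUID are placeholders, and agent_ipmitool must also be listed in enabled_drivers in ironic.conf:

    # Sketch with placeholder credentials and node UUID; assumes the conductor
    # has agent_ipmitool enabled and an agent deploy ramdisk configured.
    from ironicclient import client

    ironic = client.get_client(
        '1',                                        # Ironic API version
        os_username='admin',                        # placeholder
        os_password='secret',                       # placeholder
        os_tenant_name='admin',                     # placeholder
        os_auth_url='http://controller:5000/v2.0',  # placeholder
    )

    # Repoint the node at the agent driver so deploys boot the image's own bootloader.
    ironic.node.update(
        'NODE_UUID',
        [{'op': 'replace', 'path': '/driver', 'value': 'agent_ipmitool'}],
    )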
________________________________
From: Peeyush Gupta
Sent: Monday, December 08, 2014 10:55:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader?
Hi all,
So, I have set up a devstack ironic setup for baremetal deployment. I
have been able to deploy a baremetal node successfully using
pxe_ipmitool driver. Now, I am trying to boot a server where I already
have a bootloader i.e. I don't need pxelinux to go and fetch kernel and
initrd images for me. I want to transfer them directly.
I checked out the code and figured out that there are dhcp opts
available, that are modified using pxe_utils.py, changing it didn't
help. Then I moved to ironic.conf, but here also I only see an option to
add pxe_bootfile_name, which is exactly what I want to avoid. Can anyone
please help me with this situation? I don't want to go through
pxelinux.0 bootloader, I just directly want to transfer kernel and
initrd images.
Thanks.
--
Peeyush Gupta
gpeeyush at linux.vnet.ibm.com
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
------------------------------
Message: 19
Date: Tue, 9 Dec 2014 11:09:02 +0100
From: Roman Prykhodchenko <rprikhodchenko at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Ironic] Fuel agent proposal
Message-ID: <2FD74259-FECF-4B5E-99FE-6A3EB6976582 at mirantis.com>
Content-Type: text/plain; charset="utf-8"
It is true that IPA and FuelAgent share a lot of functionality. However, there is a major difference between them: they are intended to solve different problems.
IPA is a solution for the provision-use-destroy-use_by_different_user use case and is really great for providing BM nodes for other OpenStack services or in services like Rackspace OnMetal. FuelAgent serves the provision-use-use-…-use use case that Fuel or TripleO have.
Those two use cases require concentrating on different details in the first place. For instance, for IPA proper decommissioning is more important than advanced disk management, but for FuelAgent the priorities are the opposite, for obvious reasons.
Putting all the functionality into a single driver and a single agent may cause conflicts in priorities and make a lot of mess inside both the driver and the agent. Previously, changes to IPA were actually blocked precisely because of this conflict of priorities. Therefore, replacing FuelAgent with IPA where FuelAgent is currently used does not seem like a good option, because some people (and I'm not talking about Mirantis) might lose required features because of the different priorities.
Having two separate drivers along with two separate agents for those different use cases will allow two independent teams to concentrate on what's really important for a specific use case. I don't see any problem in overlapping functionality if it's used differently.
P. S.
I realise that people may also be confused by the fact that FuelAgent is named the way it is and is currently used only in Fuel. Our point is to make it a simple, powerful and, what's more important, generic tool for provisioning. It is not bound to Fuel or Mirantis, and if the name causes confusion in the future we will be happy to give it a different, less confusing one.
P. P. S.
Some of the points of this integration do not look generic enough or nice enough. We take a pragmatic view and are trying to implement what is possible as a first step. For sure this is going to take a lot more steps to make it better and more generic.
> On 09 Dec 2014, at 01:46, Jim Rollenhagen <jim at jimrollenhagen.com> wrote:
>
>
>
> On December 8, 2014 2:23:58 PM PST, Devananda van der Veen <devananda.vdv at gmail.com <mailto:devananda.vdv at gmail.com>> wrote:
>> I'd like to raise this topic for a wider discussion outside of the
>> hallway
>> track and code reviews, where it has thus far mostly remained.
>>
>> In previous discussions, my understanding has been that the Fuel team
>> sought to use Ironic to manage "pets" rather than "cattle" - and doing
>> so
>> required extending the API and the project's functionality in ways that
>> no
>> one else on the core team agreed with. Perhaps that understanding was
>> wrong
>> (or perhaps not), but in any case, there is now a proposal to add a
>> FuelAgent driver to Ironic. The proposal claims this would meet that
>> team's
>> needs without requiring changes to the core of Ironic.
>>
>> https://review.openstack.org/#/c/138115/
>
> I think it's clear from the review that I share the opinions expressed in this email.
>
> That said (and hopefully without derailing the thread too much), I'm curious how this driver could do software RAID or LVM without modifying Ironic's API or data model. How would the agent know how these should be built? How would an operator or user tell Ironic what the disk/partition/volume layout would look like?
>
> And before it's said - no, I don't think vendor passthru API calls are an appropriate answer here.
>
> // jim
>
>>
>> The Problem Description section calls out four things, which have all
>> been
>> discussed previously (some are here [0]). I would like to address each
>> one,
>> invite discussion on whether or not these are, in fact, problems facing
>> Ironic (not whether they are problems for someone, somewhere), and then
>> ask
>> why these necessitate a new driver be added to the project.
>>
>>
>> They are, for reference:
>>
>> 1. limited partition support
>>
>> 2. no software RAID support
>>
>> 3. no LVM support
>>
>> 4. no support for hardware that lacks a BMC
>>
>> #1.
>>
>> When deploying a partition image (eg, QCOW format), Ironic's PXE deploy
>> driver performs only the minimal partitioning necessary to fulfill its
>> mission as an OpenStack service: respect the user's request for root,
>> swap,
>> and ephemeral partition sizes. When deploying a whole-disk image,
>> Ironic
>> does not perform any partitioning -- such is left up to the operator
>> who
>> created the disk image.
>>
>> Support for arbitrarily complex partition layouts is not required by,
>> nor
>> does it facilitate, the goal of provisioning physical servers via a
>> common
>> cloud API. Additionally, as with #3 below, nothing prevents a user from
>> creating more partitions in unallocated disk space once they have
>> access to
>> their instance. Therefore, I don't see how Ironic's minimal support for
>> partitioning is a problem for the project.
>>
>> #2.
>>
>> There is no support for defining a RAID in Ironic today, at all,
>> whether
>> software or hardware. Several proposals were floated last cycle; one is
>> under review right now for DRAC support [1], and there are multiple
>> call
>> outs for RAID building in the state machine mega-spec [2]. Any such
>> support
>> for hardware RAID will necessarily be abstract enough to support
>> multiple
>> hardware vendor's driver implementations and both in-band creation (via
>> IPA) and out-of-band creation (via vendor tools).
>>
>> Given the above, it may become possible to add software RAID support to
>> IPA
>> in the future, under the same abstraction. This would closely tie the
>> deploy agent to the images it deploys (the latter image's kernel would
>> be
>> dependent upon a software RAID built by the former), but this would
>> necessarily be true for the proposed FuelAgent as well.
>>
>> I don't see this as a compelling reason to add a new driver to the
>> project.
>> Instead, we should (plan to) add support for software RAID to the
>> deploy
>> agent which is already part of the project.
>>
>> #3.
>>
>> LVM volumes can easily be added by a user (after provisioning) within
>> unallocated disk space for non-root partitions. I have not yet seen a
>> compelling argument for doing this within the provisioning phase.
>>
>> #4.
>>
>> There are already in-tree drivers [3] [4] [5] which do not require a
>> BMC.
>> One of these uses SSH to connect and run pre-determined commands. Like
>> the
>> spec proposal, which states at line 122, "Control via SSH access
>> feature
>> intended only for experiments in non-production environment," the
>> current
>> SSHPowerDriver is only meant for testing environments. We could
>> probably
>> extend this driver to do what the FuelAgent spec proposes, as far as
>> remote
>> power control for cheap always-on hardware in testing environments with
>> a
>> pre-shared key.
>>
>> (And if anyone wonders about a use case for Ironic without external
>> power
>> control ... I can only think of one situation where I would rationally
>> ever
>> want to have a control-plane agent running inside a user-instance: I am
>> both the operator and the only user of the cloud.)
>>
>>
>> ----------------
>>
>> In summary, as far as I can tell, all of the problem statements upon
>> which
>> the FuelAgent proposal are based are solvable through incremental
>> changes
>> in existing drivers, or out of scope for the project entirely. As
>> another
>> software-based deploy agent, FuelAgent would duplicate the majority of
>> the
>> functionality which ironic-python-agent has today.
>>
>> Ironic's driver ecosystem benefits from a diversity of
>> hardware-enablement
>> drivers. Today, we have two divergent software deployment drivers which
>> approach image deployment differently: "agent" drivers use a local
>> agent to
>> prepare a system and download the image; "pxe" drivers use a remote
>> agent
>> and copy the image over iSCSI. I don't understand how a second driver
>> which
>> duplicates the functionality we already have, and shares the same goals
>> as
>> the drivers we already have, is beneficial to the project.
>>
>> Doing the same thing twice just increases the burden on the team; we're
>> all
>> working on the same problems, so let's do it together.
>>
>> -Devananda
>>
>>
>> [0]
>> https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition
>>
>> [1] https://review.openstack.org/#/c/107981/
>>
>> [2]
>> https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst
>>
>>
>> [3]
>> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py
>>
>> [4]
>> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py
>>
>> [5]
>> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py
>>
>>
>> ------------------------------------------------------------------------
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org <mailto:OpenStack-dev at lists.openstack.org>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org <mailto:OpenStack-dev at lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
------------------------------
Message: 20
Date: Tue, 09 Dec 2014 15:51:39 +0530
From: Peeyush Gupta <gpeeyush at linux.vnet.ibm.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Ironic] How to get past pxelinux.0
bootloader?
Message-ID: <5486CD33.4000602 at linux.vnet.ibm.com>
Content-Type: text/plain; charset="iso-8859-1"
So, basically, if I am using the pxe driver, I would "have to" provide
pxelinux.0?
On 12/09/2014 03:24 PM, Fox, Kevin M wrote:
> You probably want to use the agent driver, not the pxe one. It lets you use bootloaders from the image.
>
> ________________________________
> From: Peeyush Gupta
> Sent: Monday, December 08, 2014 10:55:39 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader?
>
> Hi all,
>
> So, I have set up a devstack ironic setup for baremetal deployment. I
> have been able to deploy a baremetal node successfully using
> pxe_ipmitool driver. Now, I am trying to boot a server where I already
> have a bootloader i.e. I don't need pxelinux to go and fetch kernel and
> initrd images for me. I want to transfer them directly.
>
> I checked out the code and figured out that there are dhcp opts
> available, that are modified using pxe_utils.py, changing it didn't
> help. Then I moved to ironic.conf, but here also I only see an option to
> add pxe_bootfile_name, which is exactly what I want to avoid. Can anyone
> please help me with this situation? I don't want to go through
> pxelinux.0 bootloader, I just directly want to transfer kernel and
> initrd images.
>
> Thanks.
>
> --
> Peeyush Gupta
> gpeeyush at linux.vnet.ibm.com
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Peeyush Gupta
gpeeyush at linux.vnet.ibm.com
------------------------------
Message: 21
Date: Tue, 09 Dec 2014 11:32:26 +0100
From: Thierry Carrez <thierry at openstack.org>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] Cross-Project meeting, Tue December 9th,
21:00 UTC
Message-ID: <5486CFBA.8030204 at openstack.org>
Content-Type: text/plain; charset=windows-1252
joehuang wrote:
> If time is available, how about adding one agenda to guide the direction for cascading to move forward? Thanks in advance.
>
> The topic is : " Need cross-program decision to run cascading as an incubated project mode or register BP separately in each involved project. CI for cascading is quite different from traditional test environment, at least 3 OpenStack instance required for cross OpenStack networking test cases. "
Hi Joe, we close the agenda one day before the meeting to let people
arrange their attendance based on the published agenda.
I added your topic to the backlog for next week's agenda:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting
Regards,
--
Thierry Carrez (ttx)
------------------------------
Message: 22
Date: Tue, 9 Dec 2014 11:33:01 +0100
From: Miguel Ángel Ajo <majopela at redhat.com>
To: OpenStack Development Mailing List
<openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [neutron] mid-cycle "hot reviews"
Message-ID: <7A64F4A9F9054721A45DB25C9E5A181B at redhat.com>
Content-Type: text/plain; charset="utf-8"
Hi all!
It would be great if you could use this thread to post hot reviews on stuff
that is being worked on during the mid-cycle, so that others from different
timezones can participate.
I know posting reviews to the list is not permitted, but I think an exception
in this case would be beneficial.
Best regards,
Miguel Ángel Ajo
------------------------------
Message: 23
Date: Tue, 9 Dec 2014 12:36:08 +0200
From: Tihomir Trifonov <t.trifonov at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [horizon] REST and Django
Message-ID:
<CAH=QEeCX3+kVV7WrQhYYsapFhm=WJNm4Mwkq7k5MDvcr0g+JPg at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Sorry for the late reply, just a few thoughts on the matter.
IMO the REST middleware should be as thin as possible. And I mean thin in
terms of processing - it should not do pre/post processing of the requests,
but just unpack/pack. So here is an example:
instead of making AJAX calls that contain instructions:
> POST --json --data {"action": "delete", "data": [{"name": "item1"},
> {"name": "item2"}, {"name": "item3"}]}
I think a better approach is just to pack/unpack batch commands, and leave
execution to the frontend/backend and not the middleware:
> POST --json --data {"batch": [
>     {"action": "delete", "payload": {"name": "item1"}},
>     {"action": "delete", "payload": {"name": "item2"}},
>     {"action": "delete", "payload": {"name": "item3"}}
> ]}
The idea is that the middleware should not know the actual data. It should
ideally just unpack the data:

> responses = []
> for cmd in request.POST['batch']:
>     responses.append(getattr(controller, cmd['action'])(**cmd['payload']))
> return responses
and the frontend (JS) will just send batches of simple commands and will
receive a list of responses for each command in the batch. The error
handling will be done in the frontend (JS) as well.
For the more complex example of 'put()' where we have dependent objects:
project = api.keystone.tenant_get(request, id)
> kwargs = self._tenant_kwargs_from_DATA(request.DATA, enabled=None)
> api.keystone.tenant_update(request, project, **kwargs)
In practice the project data should already be present in the
frontend (assuming that we already loaded it to render the project
form/view), so:

POST --json --data {"batch": [
    {"action": "tenant_update",
     "payload": {"project": js_project_object.id, "name": "some name",
                 "prop1": "some prop", "prop2": "other prop, etc."}}
]}
So in general we don't need to recreate the full state on each REST call
if we make the frontend a full-featured application. This way the frontend
will construct the object, hold the cached value, and just send
the needed requests, singly or in batches; it will receive the response
from the API backend and render the results. The whole processing
logic will live in the frontend (JS), while the middleware will just act
as a proxy (un/packer). This way we maintain the logic only in the
frontend and do not need to duplicate any of it in the middleware.
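To make that un/pack idea concrete, a minimal sketch of such a batch view could look like the following; the 'controller' object (one method per action) and the way it gets wired up are hypothetical, and JsonResponse requires Django >= 1.7:

    import json

    from django.http import JsonResponse
    from django.views.decorators.http import require_POST


    def make_batch_view(controller):
        """Build a thin proxy view that unpacks a batch and dispatches each command."""
        @require_POST
        def batch(request):
            commands = json.loads(request.body)["batch"]
            responses = []
            for cmd in commands:
                # Dispatch each command unchanged and report per-command results,
                # so error handling stays in the frontend.
                try:
                    result = getattr(controller, cmd["action"])(**cmd["payload"])
                    responses.append({"ok": True, "result": result})
                except Exception as exc:
                    responses.append({"ok": False, "error": str(exc)})
            return JsonResponse(responses, safe=False)
        return batch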
On Tue, Dec 2, 2014 at 4:45 PM, Adam Young <ayoung at redhat.com> wrote:
> On 12/02/2014 12:39 AM, Richard Jones wrote:
>
> On Mon Dec 01 2014 at 4:18:42 PM Thai Q Tran <tqtran at us.ibm.com> wrote:
>
>> I agree that keeping the API layer thin would be ideal. I should add
>> that having discrete API calls would allow dynamic population of the table.
>> However, I will make a case where it *might* be necessary to add
>> additional APIs. Consider that you want to delete 3 items in a given table.
>>
>> If you do this on the client side, you would need to perform: n * (1 API
>> request + 1 AJAX request)
>> If you have some logic on the server side that batches delete actions: n *
>> (1 API request) + 1 AJAX request
>>
>> Consider the following:
>> n = 1, client = 2 trips, server = 2 trips
>> n = 3, client = 6 trips, server = 4 trips
>> n = 10, client = 20 trips, server = 11 trips
>> n = 100, client = 200 trips, server = 101 trips
>>
>> As you can see, this does not scale very well.... something to consider...
>>
> This is not something Horizon can fix. Horizon can make matters worse,
> but cannot make things better.
>
> If you want to delete 3 users, Horizon still needs to make 3 distinct
> calls to Keystone.
>
> To fix this, we need either batch calls or a standard way to do multiples
> of the same operation.
>
> The unified API effort is the right place to drive this.
>
>
>
>
>
>
>
> Yep, though in the above cases the client is still going to be hanging,
> waiting for those server-backend calls, with no feedback until it's all
> done. I would hope that the client-server call overhead is minimal, but I
> guess that's probably wishful thinking when in the land of random Internet
> users hitting some provider's Horizon :)
>
> So yeah, having mulled it over myself I agree that it's useful to have
> batch operations implemented in the POST handler, the most common operation
> being DELETE.
>
> Maybe one day we could transition to a batch call with user feedback
> using a websocket connection.
>
>
> Richard
>
>>
>> From: Richard Jones <r1chardj0n3s at gmail.com>
>> To: "Tripp, Travis S" <travis.tripp at hp.com>, OpenStack List <
>> openstack-dev at lists.openstack.org>
>> Date: 11/27/2014 05:38 PM
>> Subject: Re: [openstack-dev] [horizon] REST and Django
>> ------------------------------
>>
>>
>>
>>
>> On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S <travis.tripp at hp.com> wrote:
>>
>> Hi Richard,
>>
>> You are right, we should put this out on the main ML, so copying
>> thread out to there. ML: FYI that this started after some impromptu IRC
>> discussions about a specific patch led into an impromptu google hangout
>> discussion with all the people on the thread below.
>>
>>
>> Thanks Travis!
>>
>>
>>
>> As I mentioned in the review[1], Thai and I were mainly discussing
>> the possible performance implications of network hops from client to
>> horizon server and whether or not any aggregation should occur server side.
>> In other words, some views require several APIs to be queried before any
>> data can be displayed and it would eliminate some extra network requests from
>> client to server if some of the data was first collected on the server side
>> across service APIs. For example, the launch instance wizard will need to
>> collect data from quite a few APIs before even the first step is displayed
>> (I've listed those out in the blueprint [2]).
>>
>> The flip side to that (as you also pointed out) is that if we keep
>> the APIs fine-grained then the wizard will be able to optimize in one
>> place the calls for data as it is needed. For example, the first step may
>> only need half of the API calls. It also could lead to perceived
>> performance increases just due to the wizard making a call for different
>> data independently and displaying it as soon as it can.
>>
>>
>> Indeed, looking at the current launch wizard code it seems like you
>> wouldn't need to load all that data for the wizard to be displayed, since
>> only some subset of it would be necessary to display any given panel of the
>> wizard.
>>
>>
>>
>> I tend to lean towards your POV and starting with discrete API calls
>> and letting the client optimize calls. If there are performance problems
>> or other reasons then doing data aggregation on the server side could be
>> considered at that point.
>>
>>
>> I'm glad to hear it. I'm a fan of optimising when necessary, and not
>> beforehand :)
>>
>>
>>
>> Of course if anybody is able to do some performance testing between
>> the two approaches then that could affect the direction taken.
>>
>>
>> I would certainly like to see us take some measurements when performance
>> issues pop up. Optimising without solid metrics is a bad idea :)
>>
>>
>> Richard
>>
>>
>>
>> [1] https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py
>> [2] https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign
>>
>> -Travis
>>
>> From: Richard Jones <r1chardj0n3s at gmail.com>
>> Date: Wednesday, November 26, 2014 at 11:55 PM
>> To: Travis Tripp <travis.tripp at hp.com>, Thai Q Tran/Silicon Valley/IBM
>> <tqtran at us.ibm.com>, David Lyle <dklyle0 at gmail.com>, Maxime Vidori
>> <maxime.vidori at enovance.com>, "Wroblewski, Szymon"
>> <szymon.wroblewski at intel.com>, "Wood, Matthew David (HP Cloud - Horizon)"
>> <matt.wood at hp.com>, "Chen, Shaoquan" <sean.chen2 at hp.com>,
>> "Farina, Matt (HP Cloud)" <matthew.farina at hp.com>, Cindy Lu/Silicon
>> Valley/IBM <clu at us.ibm.com>, Justin Pomeroy/Rochester/IBM
>> <jpomero at us.ibm.com>, Neill Cox <neill.cox at ingenious.com.au>
>> Subject: Re: REST and Django
>>
>> I'm not sure whether this is the appropriate place to discuss this,
>> or whether I should be posting to the list under [Horizon] but I think we
>> need to have a clear idea of what goes in the REST API and what goes in the
>> client (angular) code.
>>
>> In my mind, the thinner the REST API the better. Indeed if we can get
>> away with proxying requests through without touching any client code, that
>> would be great.
>>
>> Coding additional logic into the REST API means that a developer
>> would need to look in two places, instead of one, to determine what was
>> happening for a particular call. If we keep it thin then the API presented
>> to the client developer is very, very similar to the API presented by the
>> services. Minimum surprise.
>>
>> Your thoughts?
>>
>>
>> Richard
>>
>>
>> On Wed Nov 26 2014 at 2:40:52 PM Richard Jones <r1chardj0n3s at gmail.com> wrote:
>>
>>
>> Thanks for the great summary, Travis.
>>
>> I've completed the work I pledged this morning, so now the REST
>> API change set has:
>>
>> - no rest framework dependency
>> - AJAX scaffolding in openstack_dashboard.api.rest.utils
>> - code in openstack_dashboard/api/rest/
>> - renamed the API from "identity" to "keystone" to be consistent
>> - added a sample of testing, mostly for my own sanity to check
>> things were working
>>
>> https://review.openstack.org/#/c/136676
>>
>>
>> Richard
>>
>> On Wed Nov 26 2014 at 12:18:25 PM Tripp, Travis S <travis.tripp at hp.com> wrote:
>>
>>
>> Hello all,
>>
>> Great discussion on the REST URLs today! I think that we are on
>> track to come to a common REST API usage pattern. To provide a quick summary:
>>
>> We all agreed that going to a straight REST pattern rather than
>> through tables was a good idea. We discussed using direct get / post in
>> Django views like what Max originally used[1][2] and Thai also started[3]
>> with the identity table rework or to go with djangorestframework [5] like
>> what Richard was prototyping with[4].
>>
>> The main things we would use from Django Rest Framework were
>> built-in JSON serialization (avoiding boilerplate), better exception handling,
>> and some request wrapping. However, we all weren't sure about the need for
>> a full new framework just for that. At the end of the conversation, we
>> decided that it was a cleaner approach, but Richard would see if he could
>> provide some utility code to do that much for us without requiring the full
>> framework. David voiced that he doesn't want us building out a whole
>> framework on our own either.
>>
>> So, Richard will do some investigation during his day today and
>> get back to us. Whatever the case, we?ll get a patch in horizon for the
>> base dependency (framework or Richard?s utilities) that both Thai?s work
>> and the launch instance work is dependent upon. We?ll build REST style
>> API?s using the same pattern. We will likely put the rest api?s in
>> horizon/openstack_dashboard/api/rest/.
>>
>> [1] https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/keypair.py
>> [2] https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/launch.py
>> [3] https://review.openstack.org/#/c/133767/8/openstack_dashboard/dashboards/identity/users/views.py
>> [4] https://review.openstack.org/#/c/136676/4/openstack_dashboard/rest_api/identity.py
>> [5] http://www.django-rest-framework.org/
>>
>> Thanks,
>>
>>
>> Travis
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
--
Regards,
Tihomir Trifonov
------------------------------
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
End of OpenStack-dev Digest, Vol 32, Issue 25
*********************************************