<div dir="ltr">rage-quit</div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Aug 4, 2017 at 11:52 AM, Monty Taylor <span dir="ltr"><<a href="mailto:mordred@inaugust.com" target="_blank">mordred@inaugust.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On 08/04/2017 03:24 AM, Thierry Carrez wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Michael Johnson wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I was wondering what is the current status of the python-openstacksdk<br>
project. The Octavia team has posted some patches implementing our new<br>
Octavia v2 API [1] in the SDK, but we have not had any reviews. I have also<br>
asked some questions in #openstack-sdks with no responses.<br>
I see that there are some maintenance patches getting merged and a pypi<br>
release was made 6/14/17 (though not through releases project). I'm not<br>
seeing any mailing list traffic and the IRC meetings seem to have ended in<br>
2016.<br>
<br>
With all the recent contributor changes, I want to make sure the project<br>
isn't adrift in the sea of OpenStack before we continue to spend development<br>
time implementing the SDK for Octavia. We were also planning to use it as<br>
the backing for our dashboard project.<br>
<br>
Since it's not in the governance projects list I couldn't determine who the<br>
PTL to ping would be, so I decided to ping the dev mailing list.<br>
<br>
My questions:<br>
1. Is this project abandoned?<br>
2. Is there a plan to make it an official project?<br>
3. Should we continue to develop for it?<br>
</blockquote>
<br>
Thanks for raising this.<br>
<br>
Beyond its limited activity, another issue is that it's not an official<br>
project while its name makes it a "default choice" for a lot of users<br>
(hard to blame them for thinking that<br>
<a href="http://git.openstack.org/cgit/openstack/python-openstacksdk" rel="noreferrer" target="_blank">http://git.openstack.org/cgit/openstack/python-openstacksdk</a> is the<br>
official Python SDK for OpenStack, but I digress). So I agree that the<br>
situation should be clarified.<br>
<br>
I know that Monty has pretty strong feelings about this too, so I'll<br>
wait for him to comment.<br>
</blockquote>
<br></div></div>
Oh boy. I'd kind of hoped we'd make it to the PTG before starting this conversation ... guess not. :)<br>
<br>
Concerns<br>
--------<br>
<br>
I share the same concerns Thierry listed above. Specifically:<br>
<br>
* It is not an official project, but its name leads people to believe it's the "right" thing to use if they want to talk to OpenStack clouds using Python.<br>
<br>
* The core team is small to begin with, but recently got hit in a major way by shifts in company priorities.<br>
<br>
I think we can all agree that those are concerns.<br>
<br>
Three additional points:<br>
<br>
* The OpenStack AppDev group and the various appdev hackathons use shade, not openstacksdk. It's what we have people out in the world recommending when someone writes code that consumes OpenStack APIs. The Interop challenges at the Summits so far have all used Ansible's OpenStack modules, which are based on shade, because they were the thing that worked.<br>
<br>
* Both shade and python-openstackclient have investigated using openstacksdk as their REST layer but were unable to because it puts some abstractions in layers that make it impossible to do some of the advanced things we need.<br>
<br>
* openstacksdk has internal implementations of things that exist at different points in the stack. We just added full support for version service and version discovery to keystoneauth, but openstacksdk has its own layer for that so it both can't use the ksa implementation and is not compliant with the API-WG consume guidelines.<br>
<br>
It's not all bad! There is some **great** work in openstacksdk and it's a shame there are some things that make it hard to consume. Brian, Qiming and Terry have done a bunch of excellent work - and I'd like to not lose it to the dustbin of shifting corporate interest.<br>
<br>
**warning** - there is a very large text wall that follows. If you don't care a ton about this topic, please stop reading now, otherwise you might rage-quit computers altogether.<br>
<br>
Proposal<br>
--------<br>
<br>
I'd propose we have the shade team adopt the python-openstacksdk codebase.<br>
<br>
This is obviously an aggressive suggestion and essentially represents a takeover of a project. We don't have the luxury of as many humans to work on things as we once had, so I think as a community we should be realistic about the benefits of consolidation and the downsides of continuing to have 2 different python SDKs.<br>
<br>
Doing that implies the following:<br>
<br>
* Rework the underlying guts of openstacksdk to make it possible to replace shade's REST layer with openstacksdk. openstacksdk still doesn't have a 1.0 release, so we can break the few things we'll need to break.<br>
<br>
* Update the shade mission to indicate its purpose in life isn't just hiding deployer differences but rather is to provide a holistic cloud-centric (rather than service-centric) end-user API library.<br>
<br>
* Merge the two repos and retire one of them. Specifics on the mechanics of this are below, but this will either result in moving the resource and service layer in openstacksdk into shade and adding appropriate attributes to the shade.OpenStackCloud object, or moving shade.OpenStackCloud into something like openstack.cloud and making a shade backwards-compat shim. I lean towards the first, as we've been telling devs "use shade to talk to OpenStack" at hackathons and bootcamps and I'd rather avoid the messaging shift. However, pointing to an SDK called "The Python OpenStack SDK" and telling people to use it certainly has its benefits from a messaging perspective.<br>
<br>
* Collapse the core teams - members of the python-openstacksdk-core team who desire to stick around (I see Qiming doing reviews still, and Brian has been doing occasional ones even after his day-job shift) are welcome to be added to the shade-core team, but should not feel compelled to or like they'd be letting anyone down if they didn't. Day job priorities shift, it turns out.<br>
<br>
Reworking the Guts<br>
------------------<br>
<br>
I did a scan through openstacksdk the other day to catalog what would need to be reworked. The following are the big-ticket items:<br>
<br>
* drop stevedore/plugin support. An OpenStack REST client has no need for plugins. All services are welcome. *note below*<br>
<br>
* drop the keystoneauth.Session subclass. It's overriding things at the wrong layer. keystoneauth Adapter is the thing it wants to be (a sketch of the pattern follows this list).<br>
<br>
* stop using endpoint_filter in each Session call. Instead create an Adapter with the discovery parameters needed.<br>
<br>
* add support for per-request microversions. Based on the structure currently in openstacksdk once the wrapped Session has been replaced with keystoneauth Adapter this should work really nicely.<br>
<br>
* drop incomplete version discovery support in favor of the support in keystoneauth.<br>
<br>
* drop Profile object completely and replace its use internally with the os-client-config CloudConfig object.<br>
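<br>
To make that concrete, here is a rough sketch of the consumption pattern the bullets above describe - build an Adapter per service from the os-client-config CloudConfig, let it own discovery, and pass microversions per request. This is illustrative only; the exact keystoneauth parameter names (min_version, max_version, microversion) may differ slightly from what lands in a given release.<br>
<br>
import os_client_config<br>
from keystoneauth1 import adapter<br>
<br>
# CloudConfig replaces the Profile object as the source of truth<br>
config = os_client_config.OpenStackConfig().get_one_cloud(cloud='example')<br>
session = config.get_session()<br>
<br>
# One Adapter per service, carrying service type and version discovery<br>
# hints, instead of passing endpoint_filter on every Session call<br>
compute = adapter.Adapter(<br>
    session=session,<br>
    service_type='compute',<br>
    min_version='2.1',<br>
    max_version='2.latest')<br>
<br>
# Per-request microversion rather than a connection-wide setting<br>
servers = compute.get('/servers', microversion='2.26').json()['servers']<br>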
<br>
That's not a ton of work, TBH; I could probably do all of it in a single long plane flight. It will break advanced users who might have been using Profile (it should be transparent to normal users), but as there is no 1.0 I think we should live with that. We might be able to make a shim layer for the Profile interface to avoid breaking people using that interface.<br>
<br>
*note on plugins*<br>
<br>
shade has a philosophy of not using plugins for service support that I'd like to apply here. All OpenStack services are welcome to add code directly. openstacksdk ALREADY contains code for tons of services. The only thing pluggability adds in this context is the ability to use openstacksdk to support non-OpenStack services... and at this point I do not think that is valuable. The only place this is currently used is in the Profile anyway, which allows defining an entrypoint to use to override a service - and since I'm proposing we kill the Profile object, this all falls out as a matter of consequence.<br>
<br>
Integrating with shade<br>
----------------------<br>
<br>
The primary end-user concept in shade is an OpenStackCloud object, on which one performs actions. The service that provides the action is abstracted (this is done because actions such as 'list_images' may need to be done on the image service or the compute service, depending on the cloud). So a user does:<br>
<br>
cloud = shade.openstack_cloud(cloud='example')<br>
images = cloud.list_images()<br>
<br>
The primary end-user concept in openstacksdk is the Connection, which has an object for each service. For example:<br>
<br>
conn = openstack.connection.from_config(cloud_name='example')<br>
images = conn.image.images()<br>
<br>
If we merge the two with the shade library being the primary interface, we could add the current sdk service proxy objects as attributes to the OpenStackCloud object, so that the following would work:<br>
<br>
cloud = shade.openstack_cloud(cloud='example')<br>
images = cloud.list_images()<br>
images = cloud.image.images()<br>
<br>
If we did the merge the other way, we could either keep the Connection concept and stitch the shade helpers on to it:<br>
<br>
conn = openstack.connection.from_config(cloud_name='example')<br>
images = conn.list_images()<br>
images = conn.image.images()<br>
<br>
Or do a hybrid:<br>
<br>
cloud = openstack.cloud(name='example')<br>
images = cloud.list_images()<br>
images = cloud.image.images()<br>
<br>
If we go either of the routes of merging shade into openstacksdk then the shade library itself could just be a simple sugar layer for backwards compat that has things like:<br>
<br>
def openstack_cloud(cloud=None, *args, **kwargs):<br>
    return openstack.cloud(name=cloud, *args, **kwargs)<br>
<br>
and<br>
<br>
class OpenStackCloud(openstack.cloud):<br>
    def __init__(self, cloud=None, *args, **kwargs):<br>
        super(OpenStackCloud, self).__init__(name=cloud, *args, **kwargs)<br>
<br>
I kind of like the 'Connection' term, as it communicates that this is a thing that has and shares a discrete remote connection (which is a shared keystoneauth Session). OpenStackCloud in shade **ACTUALLY** describes a cloud-region (regions in OpenStack are essentially independent clouds from an API consumption perspective), so I may be leaning more towards merging in that direction.<br>
<br>
* data model - shade has a data model contract for the resources it knows about. This actually fits nicely with the Resource construct in openstacksdk, although there are some differences. We should likely push the data model and normalization contract into the openstacksdk resource layer so that people get matching resources regardless of whether they use the shade interop layer or the low-level per-service layer.<br>
<br>
* Implement some constructor smarts for easy pass-through of sdk service proxy methods into shade wrapper methods. For MANY of the remote calls, the list_, get_, search_, create_, update_ and delete_ methods are (or can be) mechanical passthroughs from the SDK objects. We should be able to write some smart constructor logic that makes all the passthrough methods for us and just have explicit methods defined for the places where the shade layer legitimately needs to do a bunch of logic (like image upload and auto-ip support).<br>
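<br>
Purely as illustration - none of these helper names are real shade or sdk internals - the mechanical part could look roughly like this:<br>
<br>
import munch<br>
<br>
def _make_list_method(proxy_attr, plural):<br>
    def list_resources(self, **filters):<br>
        proxy = getattr(self, proxy_attr)  # e.g. self.image<br>
        # e.g. proxy.images(**filters), normalized into Munch objects<br>
        return [munch.Munch(r.to_dict())<br>
                for r in getattr(proxy, plural)(**filters)]<br>
    return list_resources<br>
<br>
class OpenStackCloud(object):<br>
    # service proxy attributes (self.image, self.compute, ...) would be<br>
    # attached in __init__ from the sdk Connection; omitted here<br>
    pass<br>
<br>
# generated passthroughs; image upload, auto-ip etc. stay hand-written<br>
for attr, plural in [('image', 'images'), ('compute', 'servers')]:<br>
    setattr(OpenStackCloud, 'list_' + plural, _make_list_method(attr, plural))<br>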
<br>
* Make openstack.resource2.Resource a subclass of munch.Munch. This will be fun. If we want to be able to have model/normalization happen at the lower level, we'd ultimately want the shade methods to be able to just return the object the sdk layer produces. Shade's interface defines that we return Munch objects (basically things that behave like dicts and objects). That's VERY similar to what the Resource already does - so if we subclass Munch, the shade behavior will hold and the sdk coding style should also be able to hold as it is today. Otherwise we'd have to have every return in shade wrap the object in a munch.Munch(resource.to_dict()), which would lose information for when that object wants to be passed back in to the API later (there are smarts in Resource for doing update things).<br>
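<br>
A toy version of why subclassing Munch keeps both calling styles working (this is not the sdk's actual class hierarchy, just the behavior being relied on):<br>
<br>
import munch<br>
<br>
class Resource(munch.Munch):<br>
    """Stand-in for openstack.resource2.Resource subclassing Munch."""<br>
<br>
    def to_dict(self):<br>
        return dict(self)<br>
<br>
image = Resource(<br>
    id='123', name='fedora-26', status='active',<br>
    location=munch.Munch(cloud='example', region_name='RegionOne'))<br>
<br>
# shade callers get Munch semantics: attribute and dict access both work<br>
assert image.name == 'fedora-26'<br>
assert image['status'] == 'active'<br>
# and the sdk layer can still hand the same object back to the API later,<br>
# instead of a lossy munch.Munch(resource.to_dict()) copy<br>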
<br>
* Migrate openstacksdk unit tests to use requests_mock. We're pretty much finished doing this in shade and it's been SUPER helpful. openstacksdk has a mock of the Session object in test_proxy_base2.py, so there are some good places to lay this in ... and as we merge the two, obviously shade's requests_mock unittests will continue to apply - but given the sdk's test organization I think we can get some really solid results by moving the mocking to the actual REST payload layer instead of mocking out the Session itself. It is also a great way to verify that things work as expected with varying payloads - so as a user finds an edge case from a specific cloud, grabbing an http trace from them and constructing a specific test is a great way to deal with regressions.<br>
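<br>
For flavor, a stripped-down sketch of the requests_mock style with made-up URLs and payloads - the real tests also have to mock the auth/catalog exchange and drive actual shade/sdk calls rather than bare requests:<br>
<br>
import requests<br>
import requests_mock<br>
<br>
def test_list_images_payload():<br>
    with requests_mock.Mocker() as m:<br>
        m.get('https://image.example.com/v2/images',<br>
              json={'images': [{'id': '123', 'name': 'fedora-26'}]})<br>
        # stand-in for the code under test; a real test would call the<br>
        # shade/sdk method and assert on the returned objects<br>
        resp = requests.get('https://image.example.com/v2/images')<br>
        assert resp.json()['images'][0]['name'] == 'fedora-26'<br>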
<br>
We'll also need to make a specific rollout plan. shade has a strict backwards-compat policy, so if we merge sdk into shade, it goes from being 0.9 to being fully supported very quickly, and we need to make sure we don't have anything exposed in a public interface we don't want to support for ages (resource -> resource2 should likely get finished, and then resource2 should likely get renamed back to resource, before such a release, for instance). If we merge the other direction and make the current shade a backwards-compat shim lib, we'll also need to cut a 1.0 of sdk pretty quickly, as whatever passthrough object we expose via the shade layer from sdk will have just adopted the shade backwards-compat contract. I don't have a detailed plan for this yet, but if we decide to go down this path I'll make one.<br>
<br>
Other TODO list items<br>
---------------------<br>
<br>
It's not just all grunt work nobody can see. There are fun things to do too!<br>
<br>
* Add openstack.OpenStack or shade.AllClouds class - It's been a todo-list item for me for a while to make a wrapper class in shade that allows easy operations on a set of Clouds. Again, I like the semantics of openstack.OpenStack better - so that's another argument in favor of that direction of merge ... it would look something like this:<br>
<br>
# make Connection objects for every cloud-region in clouds.yaml<br>
clouds = openstack.OpenStack()<br>
# thread/asyncio parallel fetch of images across all clouds<br>
images = clouds.list_images()<br>
# all objects in shade have a "location" field which contains<br>
# cloud, region, domain and project info for the resource<br>
print([image for image in images if image.location.cloud == 'vexxhost'])<br>
# Location is a required parameter for creation<br>
vexxhost = clouds.get_location(name='vexxhost')<br>
clouds.create_image(<br>
    location=vexxhost, name='my-fedora-26',<br>
    filename='fedora26.qcow2')<br>
<br>
Most of this work can actually likely be done with one smart metaclass ... finding the methods on OpenStackCloud and either doing a set of parallel gets on a list of objects or adding a location argument to write operations doesn't vary depending on the type of object. Since all the methods are named consistently, finding 'list_*' and making a set of corresponding list_ methods on the OpenStack object is essentially just one chunk of smart constructor.<br>
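<br>
A hypothetical sketch of that constructor trick - class and attribute names here are invented, and error handling is omitted:<br>
<br>
import concurrent.futures<br>
<br>
class OpenStack(object):<br>
    """Hypothetical multi-cloud wrapper; not an existing shade/sdk class."""<br>
<br>
    def __init__(self, clouds):<br>
        # clouds: already-constructed per-cloud-region objects, e.g. one<br>
        # per entry in clouds.yaml<br>
        self.clouds = list(clouds)<br>
        # the "one chunk of smart constructor": every list_* method found<br>
        # on the per-region object gets a parallel fan-out counterpart<br>
        for name in (dir(self.clouds[0]) if self.clouds else []):<br>
            if name.startswith('list_') and callable(getattr(self.clouds[0], name)):<br>
                setattr(self, name, self._make_fanout(name))<br>
<br>
    def _make_fanout(self, name):<br>
        def fanout(*args, **kwargs):<br>
            with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:<br>
                futures = [pool.submit(getattr(cloud, name), *args, **kwargs)<br>
                           for cloud in self.clouds]<br>
                results = []<br>
                for future in futures:<br>
                    results.extend(future.result())<br>
            return results<br>
        return fanout<br>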
<br>
* Finish per-resource caching / batched client-side rate-limiting work. shade has a crazy cool ability to do batched and rate-limited operations ... this is how nodepool works at the scale it does. But it's currently only really plumbed through for server, floating-ip and port (guess what nodepool has to deal with). This should be generalized to all of the resources, configurable per resource name in clouds.yaml, and should work whether high- or low-level interfaces are used. This is super hard to get RIGHT, so it's one of those "spend 4 weeks writing 10 lines of code" kind of things, but it's also super important.<br>
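<br>
The client-side half of that is conceptually just request spacing; a deliberately simplified illustration (not shade's actual task-manager code):<br>
<br>
import time<br>
<br>
class RateLimiter(object):<br>
    """Space calls out so at most `rate` requests per second hit the cloud."""<br>
<br>
    def __init__(self, rate):<br>
        self.interval = 1.0 / rate<br>
        self._next = 0.0<br>
<br>
    def __call__(self, func, *args, **kwargs):<br>
        wait = self._next - time.time()<br>
        if wait > 0:<br>
            time.sleep(wait)<br>
        self._next = time.time() + self.interval<br>
        return func(*args, **kwargs)<br>
<br>
# e.g. one limiter per resource type, with rates coming from clouds.yaml<br>
list_servers_limited = RateLimiter(rate=2.0)<br>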
<br>
* Implement a flag for toggling list/client-side-filter vs. remote-get operations. Most single-resource operations in shade are actually done as a list followed by a client-side filter. Again, this model is there to support nodepool scale (amusingly enough, it puts less load on the clouds at scale), but at small scale it is more costly and some users find it annoying. We've discussed having the ability to toggle this at constructor time - and then having things like the ansible modules default the flag to use remote-get instead of list/filter, since those run as lots of independent processes, so the optimization of list/filter is never realized.<br>
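<br>
A toy sketch of the constructor-time toggle - the flag and helper names are invented for illustration:<br>
<br>
class ImageAPI(object):<br>
    """Toy example; not real shade code."""<br>
<br>
    def __init__(self, use_direct_get=False):<br>
        self.use_direct_get = use_direct_get<br>
<br>
    def list_images(self):<br>
        raise NotImplementedError  # real code: GET /v2/images once<br>
<br>
    def _get_image_remote(self, name_or_id):<br>
        raise NotImplementedError  # real code: GET /v2/images/{id}<br>
<br>
    def get_image(self, name_or_id):<br>
        if self.use_direct_get:<br>
            # one remote GET; cheaper for one-shot callers like the<br>
            # ansible modules, which never reuse the listing<br>
            return self._get_image_remote(name_or_id)<br>
        # default: list once and filter client-side, which wins at<br>
        # nodepool scale where the listing is cached and reused<br>
        return next((i for i in self.list_images()<br>
                     if name_or_id in (i.get('id'), i.get('name'))), None)<br>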
<br>
* Implement smarter and more comprehensive "pushdown" filtering. I think we can piggyback a bunch of this off of the SDK layer - but there are attributes that can be used for server-side filtering, there are attributes that can't, and there are attributes that are client-side created via normalization that either can be translated into a server-side filter or must be client-side filtered. Resource has the structure for dealing with this sanely, I believe, but it needs to be tracked through.<br>
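<br>
A toy sketch of the filter split - the attribute sets here are made up; the real knowledge would live on the sdk Resource definitions:<br>
<br>
# which filters a given resource supports server-side (made up here)<br>
SERVER_SIDE_FILTERS = {'status', 'visibility'}<br>
<br>
def split_filters(filters):<br>
    """Split requested filters into query parameters and local predicates."""<br>
    query = {k: v for k, v in filters.items() if k in SERVER_SIDE_FILTERS}<br>
    local = {k: v for k, v in filters.items() if k not in SERVER_SIDE_FILTERS}<br>
    return query, local<br>
<br>
query, local = split_filters({'status': 'active', 'location': 'vexxhost'})<br>
# query -> {'status': 'active'} goes on the GET as ?status=active;<br>
# 'location' is created by normalization and gets filtered client-side<br>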
<br>
* Add python3-style type annotations. We just started doing this in zuul and it's pretty cool - and it is possible to do in a python2-compatible way.<br>
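<br>
The usual python2-compatible spelling is PEP 484 type comments, which mypy understands without python3-only syntax; for example:<br>
<br>
from typing import List, Optional<br>
<br>
def list_images(filters=None):<br>
    # type: (Optional[dict]) -> List[dict]<br>
    return []<br>
<br>
def get_image(name_or_id):<br>
    # type: (str) -> Optional[dict]<br>
    for image in list_images():<br>
        if name_or_id in (image.get('id'), image.get('name')):<br>
            return image<br>
    return None<br>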
<br>
Longer Goals<br>
------------<br>
<br>
That gets us a home for openstacksdk, a path towards consolidation of effort and a clear story for our end users. There are a few longer-term things we should be keeping in mind as we work on this:<br>
<br>
* suitability for python-openstackclient. Dean and Steve have been laying in the groundwork for doing direct-REST in python-openstackclient because python-*client are a mess from an end-user perspective and openstacksdk isn't suitable. If we can sync on requirements hopefully we can produce something that python-openstackclient can honestly use for that layer instead of needing local code.<br>
<br>
* suitability for heat/horizon - both heat and horizon make calls to other OpenStack services as a primary operation (plenty of services make service-to-service calls, but for heat and horizon it is a BIG part of their life). The results of this work should allow heat and horizon to remove the local work they have for using python-*client, doing local version discovery or any of the rest - and should expose to them rich primitives they can use easily.<br>
<br>
Conclusion<br>
----------<br>
<br>
As I mentioned at the top, I'd been thinking about some of this already and had planned on chatting with folks in person at the PTG, but it seems we're at a place where that's potentially counterproductive.<br>
<br>
Depending on what people think I can follow this up with some governance resolutions and more detailed specs.<br>
<br>
Thanks!<span class="HOEnZb"><font color="#888888"><br>
Monty</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
____________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe<br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><div dir="ltr">Kind regards,<br><br>Melvin Hillsman<br><a href="mailto:mrhillsman@gmail.com" target="_blank">mrhillsman@gmail.com</a><br>mobile: (832) 264-2646<br></div></div>
</div>