[openstack-dev] [requirements][lbaas] gunicorn to g-r

Thomas Goirand zigo at debian.org
Wed Oct 19 13:01:58 UTC 2016


On 10/18/2016 08:25 PM, Monty Taylor wrote:
> On 10/18/2016 12:05 PM, Adam Harwell wrote:
>> Inline comments.
>>
>> On Wed, Oct 19, 2016 at 1:38 AM Thomas Goirand <zigo at debian.org> wrote:
>>
>>     On 10/18/2016 02:37 AM, Ian Cordasco wrote:
>>     > On Oct 17, 2016 7:27 PM, "Thomas Goirand" <zigo at debian.org> wrote:
>>     >>
>>     >> On 10/17/2016 08:43 PM, Adam Harwell wrote:
>>     >> > Jim, that is exactly my thought -- the main focus of g-r as far
>>     >> > as I was aware is to maintain interoperability between project
>>     >> > dependencies for OpenStack deploys, and since our amphora image
>>     >> > is totally separate, it should not be restricted to g-r
>>     >> > requirements.
>>     >>
>>     >> The fact that we have a unified version number of a given lib in
>>     >> all of OpenStack is also because that's a requirement of
>>     >> downstream distros.
>>     >>
>>     >> Imagine that someone would like to build the Octavia image using
>>     >> exclusively packages from <your-favorite-distro-here>...
>>     >>
>>     >> > I brought this up, but
>>     >> > others thought it would be prudent to go the g-r route anyway.
>>     >>
>>     >> It is, and IMO you should go this route.
>>     >
>>     > I'm not convinced by your arguments here, Thomas. If the distributor
>>     > were packaging Octavia for X but the image is using some other
>>     > operating system, say Y, why are X's packages relevant?
>>
>>     What if the operating systems were the same?
>>
>> We still want to install from PyPI, because we still want deployers to
>> build images for their cloud using our DIB elements. There is absolutely
>> no situation in which I can imagine we'd want to install a binary
>> packaged version of this. There's a VERY high chance we will soon be
>> using a distro that isn't even a supported OpenStack deploy target...
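>>
>> To make that concrete: the DIB elements boil down to running pip inside
>> the image chroot while it is built, roughly like this (simplified, and
>> the package name is a stand-in for what the real element installs):
>>
>>     pip install amphora-agent
>>
>> so the resulting image never depends on the distro's python packages
>> at all.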
>>
>>
>>     As a Debian package maintainer, I really prefer if the underlying images
>>     can also be Debian (and preferably Debian stable everywhere).
>>
>> Sure, I love Debian too, but we're investigating things like Alpine and
>> CirrOS as our base image, and there's pretty much zero chance anyone
>> will package ANY of our deps for those distros. CirrOS doesn't even have
>> a package manager AFAIK.
>>
>>
>>     > I would think that if this
>>     > is something inside an image going to be launched by Octavia that
>>     > co-installability wouldn't really be an issue.
>>
>>     The issue isn't co-installability, but the fact that downstream
>>     distribution vendors will only package *ONE* version of a given python
>>     module. If Octavia needs version X of a module and another component
>>     of OpenStack needs version Y, then we're stuck with Octavia not being
>>     packageable in downstream distros.
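>>
>>     Concretely, global-requirements boils down to a single specifier
>>     line per module that every project inherits, something like this
>>     (the version bound here is illustrative, not the actual entry):
>>
>>         gunicorn>=19.0  # MIT
>>
>>     One line, one version range, and therefore exactly one
>>     python-gunicorn package for a distro to carry.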
>>
>> Octavia will not use gunicorn for its main OpenStack API layer. It will
>> continue to be packageable regardless of whether gunicorn is available.
>> Gunicorn is used for our *amphora image*, which is not part of the main
>> deployment layer. It is part of our *dataplane*. It is unrelated to any
>> part of Octavia that is deployed as part of the main service layer of
>> OpenStack. In fact, in production, deployers may completely ignore
>> gunicorn altogether and use a different solution; that is up to how they
>> build their amphora image (which, again, is not part of the main
>> deployment). We just use gunicorn in the image we use for our gate tests.
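>>
>> For context, the agent side is just a WSGI callable, so gunicorn is a
>> drop-in way to serve it. A stripped-down sketch (illustrative only, not
>> our actual agent code):
>>
>>     # amphora_agent.py - stand-in for the amphora REST agent
>>     def app(environ, start_response):
>>         # gunicorn serves any WSGI callable; the real agent exposes a
>>         # REST API, not this stub response
>>         start_response('200 OK', [('Content-Type', 'text/plain')])
>>         return [b'amphora agent alive\n']
>>
>> Inside the image it gets launched with something like
>> "gunicorn --bind 0.0.0.0:9443 amphora_agent:app", and anyone who prefers
>> a different WSGI server can swap that in when they build their own image.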
>>
>>
>>     > I don't lean either way right now, so I'd really like to understand
>>     > your point of view, especially since right now it isn't making much
>>     > sense to me.
>>
>>     Do you understand now? :)
>>
>> I see what you are saying, but I assert it does not apply to our case at
>> all. Do you see how our case is different? 
> 
> I totally understand, and I can see why it would seem very different.
> Consider a few things though:
> 
> - OpenStack tries its best not to pick favorites among operating
> systems, and I think the same applies to guest VMs, even if they just
> seem like appliances. While we as upstream may be looking at using
> something like Alpine as the base OS for the service VM appliance, that
> does not necessarily imply that all deployers _must_ use Alpine in their
> service VM, for exactly the reason you mention (you intend for them to
> run diskimage-builder themselves).
> 
> - If a deployer happens to have a strong preference for a given OS
> (I've been on customer calls where it was a problem that an OpenStack
> product was tied to a particular OS other than the one otherwise vetted
> at that customer), then the use of dib by the deployer allows them to
> base their service VM on the OS of their choice. That's pretty awesome.
> 
> - If that deployer similarly has an aversion to deploying any software
> that didn't come from distro packages, one could imagine that they would
> want their diskimage-builder run to install the python code not from pip
> but instead from apt/dnf/zypper. There is obviously nothing stopping that.
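> 
> As a sketch (the element name and variable here are illustrative, not a
> recipe), such a run could look like:
> 
>     export DIB_INSTALLTYPE_amphora_agent=package
>     disk-image-create -o amphora-image ubuntu vm amphora-agent
> 
> with the same elements deciding at build time whether the bits come from
> pip or from the distro archive.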
> 
> - Finally, if Debian, Ubuntu, Red Hat, SuSE or Gentoo chose to make the
> parts available in their distro, such that a diskimage-builder run using
> their OS as a base and using packages to install the python code would
> work, then it would come from the same OS repo as everything else. For
> the distros that only allow one version of a particular piece of
> software at a time, that means they would need packages of the software
> you expect to be installed inside the service VM, and its dependencies.
> 
> So while it's different as a developer in the gate, the principles
> behind why we share a set of global-requirements still hold true.
> 
> That said - as I mentioned on IRC, I have no personal issue adding
> gunicorn to global-requirements. Seems like a fine choice to me. I
> mostly barfed the above as a long-winded attempt at explaining how even
> though it is different, it's still the same. In fact, I think the fact
> that it's the same goes to show you've been doing a good job. :)
> 
> Monty

I couldn't have said it better myself. I fully agree with the above.
Thanks.

Cheers,

Thomas Goirand (zigo)



