[openstack-dev] [tc] revised Postgresql support status patch for governance

Monty Taylor mordred at inaugust.com
Sun May 21 19:51:19 UTC 2017


On 05/18/2017 02:49 PM, Sean Dague wrote:
> On 05/18/2017 01:02 PM, Mike Bayer wrote:
>>
>>
>> On 05/17/2017 02:38 PM, Sean Dague wrote:
>>>
>>> Some of the concerns/feedback has been "please describe things that are
>>> harder by this being an abstraction", so examples are provided.
>>
>> so let's go through this list:
>>
>> - OpenStack services taking a more active role in managing the DBMS
>>
>> mmmm, "managing" is vague to me, are we referring to the database
>> service itself, e.g. starting / stopping / configuring?   installers
>> like tripleo do this now, pacemaker is standard in HA for control of
>> services, I think I need some background here as to what the more active
>> role would look like.
>
> I will leave that one for mordred, it was his concern.

I have written a novel on this topic just now in a thread titled

   "[tc] Active or passive role with our database layer"

>>
>>
>> - The ability to have zero down time upgrade for services such as
>>   Keystone.
>>
>> So "zero down time upgrades" seems to have broken into:
>>
>> * "expand / contract with the code carefully dancing around the
>> existence of two schema concepts simultaneously", e.g. nova, neutron.
>> AFAIK there is no particular issue supporting multiple backends on this
>> because we use alembic or sqlalchemy-migrate to abstract away basic
>> ALTER TABLE types of features.

Agree. But there are still issues with designing the schema upgrades 
themselves to be compatible with replication streams or other online 
schema update constraints.
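
As a concrete sketch (entirely hypothetical - the table and column names
are made up), the expand phase under alembic hinges on keeping every
change additive and nullable, so old and new code can run against the
same schema at once:

    # expand phase: additive, nullable changes only, so services still
    # running the previous release keep working against the new schema
    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # old code ignores the column; new code treats NULL as
        # "not yet backfilled"
        op.add_column(
            'instances',
            sa.Column('new_flavor_id', sa.String(36), nullable=True))

The contract migration that drops the old column ships a release later,
after data has been backfilled and nothing reads the old column anymore.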

>> * "expand / contract using server side triggers to reconcile the two
>> schema concepts", e.g. keystone.   This is more difficult because there
>> is currently no "trigger" abstraction layer.   Triggers represent more
>> of an imperative programming model vs. typical SQL,  which is why I've
>> not taken on trying to build a one-size-fits-all abstraction for this in
>> upstream Alembic or SQLAlchemy.   However, it is feasible to build a
>> "one-size-that-fits-openstack-online-upgrades" abstraction.  I was
>> trying to gauge interest in helping to create this back in the
>> "triggers" thread, in my note at
>> http://lists.openstack.org/pipermail/openstack-dev/2016-August/102345.html,
>> which also referred to some very raw initial code examples.  However, it
>> received strong pushback from a wide range of openstack veterans, which
>> led me to believe this was not a thing that was happening.   Apparently
>> Keystone has gone ahead and used triggers anyway; however, I was not
>> pulled into that process.   But if triggers are to be "blessed" by at
>> least some projects, I can likely work on this problem for MySQL /
>> Postgresql agnosticism.  If keystone is using triggers right now for
>> online upgrades, I would ask: are those migrations currently working on
>> Postgresql as well, with PG-specific triggers, or does Postgresql
>> degrade into a "non-online" migration scenario if you're running
>> Keystone on it?
>
> This is the triggers conversation, which while I have issues with, is
> the only path forward now if you are doing keystone in a load balancer
> and need to retain HA through the process.

I also have issues with this - and I continue to reject categorically the
assertion that it's the only path forward.

Triggers are not a normal or recommended way to deal with this. There ARE
best-practice ways to handle it ... but to the point of the other email,
they require being more intimate with the HA architecture.

> No one is looking at pg here. And yes, everything not mysql would just
> have to take the minimal expand / contract downtime. Data services like
> Keystone / Glance, whose data is their REST API, definitely have
> different concerns than Nova dropping its control plane for 30s to
> recycle code and apply db schema tweaks.

Depending on the app, nova's control plane is just as much of a concern.
I agree - there are certainly plenty of workloads out there where it's
not - but there is an issue at hand that needs to be solved, and it needs
to be solved one time and then always work.

>> - Consistent UTF8 4 & 5 byte support in our APIs
>>
>> "5 byte support" appears to refer to utf-8's ability to be...well a
>> total of 6 bytes.    But in practice, unicode itself only needs 4 bytes
>> and that is as far as any database supports right now since they target
>> unicode (see https://en.wikipedia.org/wiki/UTF-8#Description).  That's
>> all any database we're talking about supports at most.  So...lets assume
>> this means four bytes.
>
> The 5 byte statement came in via a bug to Nova; it might have been
> confused, and I might have been confused in interpreting it. Let's
> assume it's invalid now and move to 4 byte.

Yes.

>>
>> From the perspective of database-agnosticism with regard to database
>> and driver support for non-ascii characters, this problem was solved by
>> SQLAlchemy well before Python 3 existed, back when many DBAPIs would
>> literally crash if they received a u'' string, and the rest of them
>> would churn out garbage; SQLAlchemy implemented a full encode/decode
>> layer on top of the Python DBAPI to fix this.  The situation is vastly
>> improved now that all DBAPIs support unicode natively.
>>
>> However, on the MySQL side there is the complexity that its "utf8"
>> charset is a 3-byte-only storage model, and you have to use utf8mb4 if
>> you want the four-byte model.   I'm not sure right now what projects are
>> specifically hitting issues related to this.
>>
>> Postgresql doesn't have such a limitation.   If your Postgresql server
>> or specific database is set up for utf-8 (which should be the case),
>> then you get full utf-8 character set support.
>>
>> So I don't see the problem of "consistent utf8 support" having much to
>> do with whether or not we support Postgresql - you of course need your
>> "CREATE DATABASE" to include the utf8 charset like we do on MySQL, but
>> that's it.
>
> That's where we stand, which means that we're doing 3-byte UTF8 on MySQL
> and 4-byte on PG. That's actually an API-facing difference today. It's
> work to dig out of on the MySQL side; maybe the PG one is just all
> super cool and done. But it's still a consideration point.

The biggest concern for me is that we're letting API behavior be 
dictated by database backend and/or database config choices. The API 
should behave like the API behaves.
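
For reference, getting the 4-byte behavior on MySQL touches both the
connection and the schema. A sketch (connection details and table are
made up; the index caveat is real - with utf8mb4 a VARCHAR(255) exceeds
MySQL's default 767-byte index prefix limit):

    import sqlalchemy as sa

    # the session charset must be utf8mb4 too, or data is still
    # negotiated as 3-byte utf8 on the wire
    engine = sa.create_engine(
        'mysql+pymysql://user:secret@db.example.com/nova?charset=utf8mb4')

    metadata = sa.MetaData()
    instances = sa.Table(
        'instances', metadata,
        sa.Column('display_name', sa.String(191)),  # 191 * 4 < 767
        mysql_charset='utf8mb4',  # per-table override of the default
    )

On the PG side the equivalent is nothing more than CREATE DATABASE ...
ENCODING 'UTF8', which is exactly how the two backends end up exposing
different API behavior today.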

>> - The requirement that Postgresql libraries are compiled for new users
>>   trying to just run unit tests (no equiv is true for mysql because of
>>   the pure python driver).
>>
>> I would suggest that new developers for whom the presence of things like
>> postgresql client libraries is a challenge (but somehow they are running
>> a MySQL server for their pure python driver to talk to?)  don't actually
>> have to worry about running the tests against Postgresql, this is how
>> the "opportunistic" testing model in oslo.db has always worked; it only
>> runs for the backends that you have set up.
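
To illustrate the opportunistic model (a sketch - the test bodies are
invented, the base classes are from oslo_db.sqlalchemy.test_base as it
exists today): the same test is written per backend, runs when that
database is reachable, and is skipped otherwise.

    from oslo_db.sqlalchemy import test_base

    class ModelsTestMySQL(test_base.MySQLOpportunisticTestCase):
        def test_models_sync(self):
            pass  # only runs if a local MySQL is up

    class ModelsTestPostgreSQL(test_base.PostgreSQLOpportunisticTestCase):
        def test_models_sync(self):
            pass  # silently skipped with no Postgresql around

So a developer without Postgresql client libraries simply never
exercises those tests.
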
>>
>> Also, openstack got approximately all the way through Kilo using the
>> native python-MySQL driver, which required a compiled client library as
>> well as the MySQL dependencies to be installed.  The psycopg2 driver has
>> a ton of whl's up on pypi (https://pypi.python.org/pypi/psycopg2) and
>> all linux distros supply it as a package in any case, so an actual
>> "compile" should not be needed.   Also, this is Openstack... its basic
>> existence is a kind of vastly enormous glue between thousands of native
>> (ultimately C-compiled) libraries and packages, and it runs only
>> on... linux.    So this is just a weird point to bring up.  Seems like
>> a bit of a red herring to me.
>
> They aren't running a database. This is one of those areas where pypi
> wheel building has gotten a lot better since the last time I looked, and
> one of the reasons we had this whole bindep system in OpenStack. Because
> of how test-requirements are installed, the drivers are installed in tox
> whether or not they are used, because there is no good way to
> late-install them in runs. Whether or not they are used is based on
> whether a db is set up.
>
> But, cool, pypi wheels are good enough that we can delete the need for
> all these headers for end users, very cool.
>
>> - Consistency around case sensitivity collation defaults that lead to
>>   strange bugs around searching/updating names in resources.
>>
>> Finally, a real issue-ish thing that requires a resolution.    So the
>> good news here is that while MySQL defaults to case-insensitive
>> collations (which I assume we like) and Postgresql has almost no support
>> for case-insensitive collations (unless you use the odd CITEXT
>> datatype), it is possible to make a case-sensitive collation behave
>> case-insensitively at SQL query time much more easily than it would
>> be to go the other way.
>>
>> SQLAlchemy already has some case-insensitive operators, most notably
>> "ilike()", which is a case-insensitive "LIKE" that is backend agnostic.
>>  If these search queries are just using LIKE, then they need only use
>> ilike() from SQLAlchemy instead of like().
>>
>> If we are talking about the full range of operators like ==, !=, <, >
>> etc., and/or if we are also concerned that developers may use like()
>> when they really need to use ilike(), the issue can be addressed at the
>> typing level as well.  Using SQLAlchemy, a String datatype that
>> guarantees case-insensitive comparisons is straightforward to construct.
>>   This would be a Python-side replacement for the String type, and
>> possibly Text, Unicode, etc. as needed.   It does **not** imply any
>> database schema migrations.   The hypothetical CaseInsensitiveString
>> would override all the comparison operators to ensure that on the
>> Postgresql (or other case-sensitive) backend, both sides of the
>> expression are embedded within the SQL LOWER() function, so that these
>> comparisons act case insensitively.    The trick then is to get the
>> downstream projects to use this type (which would be in oslo.db) in
>> their models, which is the usual herding cats job.  But this is a pretty
>> solvable problem.
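
For what it's worth, the type Mike describes is small. A sketch (the
class and model names are hypothetical; per his note, this would live in
oslo.db):

    from sqlalchemy import func
    from sqlalchemy.types import String, TypeDecorator

    class CaseInsensitiveString(TypeDecorator):
        """A String whose comparisons LOWER() both sides, so that
        case-sensitive backends like Postgresql match MySQL's *_ci
        collation behavior. No schema migration is implied."""

        impl = String

        class comparator_factory(TypeDecorator.Comparator):
            def operate(self, op, *other, **kwargs):
                # wrap both sides of ==, !=, <, LIKE, etc. in LOWER()
                other = [func.lower(o) for o in other]
                return op(func.lower(self.expr), *other, **kwargs)

With a model column declared as CaseInsensitiveString(255), a filter
like Resource.name == 'Foo' renders WHERE lower(name) = lower('Foo') on
every backend, and a stray .like() behaves like .ilike().
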
>
> Sure, it's work. But that's fine. The point of that list was that there
> is stuff that is work because SQLA is a leaky abstraction. Which is fine
> if there are people taking that work off the table.

I would not characterize this as SQLA being a leaky abstraction.

I'd say that at some point we didn't make a decision as to what we
wanted to do with text input: how it would be stored or not stored, and
how it would be searched and sorted. Case-sensitive collations have
been available to us the entire time, but we never decided whether our
API was case sensitive or case insensitive. OR - we *DID* decide that
our API is case insensitive, and the fact that it isn't on some
deployments is a bug. I'm putting money on the 'nobody made a decision'
answer.



