[nova] super long online_data_migrations

Surya Seetharaman surya.seetharaman9 at gmail.com
Mon Apr 1 09:30:27 UTC 2019


Hi Mohammed,

On Mon, Apr 1, 2019 at 4:29 AM Mohammed Naser <mnaser at vexxhost.com> wrote:

> On Sun, Mar 31, 2019 at 10:21 PM Mohammed Naser <mnaser at vexxhost.com>
> wrote:
> >
> > Hi there,
> >
> > During upgrades, I've noticed that when running online_data_migrations
> > with "infinite-until-done" mode, it loops over all of the migrations
> > one by one.
> >
> > However, one of the online data migrations
> > (instance_obj.populate_missing_availability_zones) makes a query that
> > takes a really long time because it seems inefficient (and it eventually
> > returns 0, because it has already run). Since the loop works in
> > "blocks" of 50, that adds roughly a 2-3 to 8 minute wait in really
> > large environments.
>

Hmm, all we do in that migration is try to get instance records whose
availability_zone is None [1], and if no records are found we just return
"all done". That said, I agree that once a migration is done, the next time
we loop through all the migrations we still run the query at least once to
confirm that zero records come back. For most of the migrations we don't
keep a persistent marker saying the migration completed in a previous run,
which means every pass walks through the whole table again.
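
To make it concrete, each pass effectively re-issues something along these
lines against the database (a simplified sketch of the query the ORM
generates, not the literal SQL; the column list and the soft-delete filter
are assumptions on my part):

  SELECT * FROM instances
   WHERE availability_zone IS NULL
     AND deleted = 0
   LIMIT 50;

Unless there happens to be an index on availability_zone, that LIMIT does
not help when nothing matches: the whole instances table still gets scanned
on every pass, even when there is nothing left to migrate.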



> >
> > The query in question, specifically:
> >
> > SELECT count(*) AS count_1
> > FROM (SELECT instance_extra.created_at AS instance_extra_created_at,
> >              instance_extra.updated_at AS instance_extra_updated_at,
> >              instance_extra.deleted_at AS instance_extra_deleted_at,
> >              instance_extra.deleted AS instance_extra_deleted,
> >              instance_extra.id AS instance_extra_id,
> >              instance_extra.instance_uuid AS instance_extra_instance_uuid
> >       FROM instance_extra
> >       WHERE instance_extra.keypairs IS NULL
> >         AND instance_extra.deleted = 0) AS anon_1
> >
>

This is the keypair_obj.migrate_keypairs_to_api_db migration that was added
in Newton. Since we are just counting, I guess we need not pull the whole
record (though I am not sure how much improvement that alone would give). I
am not an SQL expert myself; maybe jaypipes can help here.
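
For what it is worth, the count could at least be written without selecting
every column, something like the sketch below (untested; note also that
recent MySQL/MariaDB versions will usually merge that derived table away,
so the wrapper itself is probably not the real cost):

  SELECT COUNT(*)
    FROM instance_extra
   WHERE keypairs IS NULL
     AND deleted = 0;

Even in that form the EXPLAIN below would still show a full table scan, so
on its own this probably does not buy much.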


> > The explain for the DB query in this example:
> >
> > +------+-------------+----------------+------+---------------+------+---------+------+--------+-------------+
> > | id   | select_type | table          | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
> > +------+-------------+----------------+------+---------------+------+---------+------+--------+-------------+
> > |    1 | SIMPLE      | instance_extra | ALL  | NULL          | NULL | NULL    | NULL | 382473 | Using where |
> > +------+-------------+----------------+------+---------------+------+---------+------+--------+-------------+
> >
> > It's possible that it can be even worse, as this number is from
> > another very long-running environment.
> >
> > +------+-------------+----------------+------+---------------+------+---------+------+---------+-------------+
> > | id   | select_type | table          | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
> > +------+-------------+----------------+------+---------------+------+---------+------+---------+-------------+
> > |    1 | SIMPLE      | instance_extra | ALL  | NULL          | NULL | NULL    | NULL | 3008741 | Using where |
> > +------+-------------+----------------+------+---------------+------+---------+------+---------+-------------+
> >
> > I'm not an SQL expert, but could we not optimize this?  Alternatively,
> > could we update the online data migrations code to "pop out" any of
> > the migrations that return 0 for the next iteration, so that it only
> > works on those online_data_migrations that *have* to be done and
> > ignores those it knows are done?
>
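
On the "could we not optimize this" part: the type=ALL in those EXPLAIN
outputs just means there is no index covering the predicate, so every pass
scans the whole table. One option (purely a sketch from my side, not
something Nova carries today as far as I know) would be a small composite
index; keypairs is a TEXT column, so a short prefix is enough for the
IS NULL check:

  ALTER TABLE instance_extra
    ADD INDEX instance_extra_deleted_keypairs_idx (deleted, keypairs(1));

Whether adding a schema change just to speed up an old online data
migration is worth it is debatable, of course.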

I don't know of a good way to persistently store the state of finished
migrations so that they are never executed again (as in, not having to make
the query at all) once they are done. It would also be nice to be able to
opt in to specific migrations, especially since these span multiple
releases.
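
The simplest variant of the "pop out" idea would not even need persistence:
within a single until-done run, once a migration reports that it found zero
rows we could drop it from the list for the remaining iterations. Making
that survive across runs would need some bookkeeping, for example a tiny
table along these lines (entirely hypothetical, nothing like this exists in
Nova today):

  CREATE TABLE online_migration_state (
      migration_name VARCHAR(255) NOT NULL PRIMARY KEY,
      completed_at   DATETIME NOT NULL
  );

The tricky part would be deciding when it is truly safe to mark a migration
as finished forever, which is probably why we just re-run the query today.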


>
> and while we're at it, can we just bump the default rows-per-run to
> something more than
> 50 rows? it seems super .. small :)
>
>
I agree that the default of 50 is a pretty small batch size, especially for
large deployments.
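
(As far as I remember there is already a --max-count option on "nova-manage
db online_data_migrations" to control how many rows a single invocation
processes; without it we fall back to the until-done loop in batches of 50,
so this is really a question about changing that default.)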

[1]
https://github.com/openstack/nova/blob/95a87bce9fa7575c172a7d06344fd3cd070db587/nova/objects/instance.py#L1302

Thanks for bringing this up,
Regards,
Surya.