Hi all,

At OVH we needed to write our own tool that archives data from OpenStack databases, both to prevent the side effects of huge tables (slower response times, changes in the MariaDB query plan) and to meet some legal requirements. So we wrote a Python tool called OSArchiver, which I briefly presented in Denver a few days ago in the "Optimizing OpenStack at large scale" talk. We think this tool could be helpful to others and we are ready to open source it, but first we would like to get the ops community's opinion on it.

To sum up, OSArchiver is written to work regardless of the OpenStack project. The tool relies on the fact that soft-deleted rows are recognizable by their 'deleted' column, which is set to 1 or to a uuid, and their 'deleted_at' column, which is set to the date of deletion.

Points to keep in mind about OSArchiver:

* It has no knowledge of business objects
* A table may only be archived if it contains a 'deleted' column
* Child rows are archived before parent rows
* A row is never deleted if it fails to be archived

Features already implemented:

* Archive data to another database and/or to a file (SQL and CSV formats are currently supported) so that it can easily be re-imported
* Delete data from OpenStack databases
* Customizable (retention, excluded DBs, excluded tables, bulk insert/delete)
* Multiple archiving configurations
* Dry-run mode
* Easily extensible: you can add your own destination module (other file formats, remote storage, etc.)
* Archive-only and/or delete-only modes

This also means that, by design, you can run OSArchiver not only against OpenStack databases but also against already-archived OpenStack databases.

Thanks in advance for your feedback.

--
Pierre-Samuel Le Stang
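
P.S. To give a rough idea of the core archiving loop, here is a minimal illustrative sketch in Python. This is not OSArchiver's actual code: the table layout, column names, retention value and bulk size are assumptions, and it assumes a DB-API connection (qmark paramstyle, e.g. sqlite3) plus an archive table that already has the same schema as the source table.

  from datetime import datetime, timedelta, timezone

  RETENTION_DAYS = 30   # assumed retention period
  BULK_SIZE = 1000      # assumed bulk insert/delete size

  def archive_soft_deleted(conn, table, archive_table):
      # Soft-deleted rows carry a 'deleted_at' timestamp (and a non-zero
      # 'deleted' column); only rows older than the retention are touched.
      cutoff = (datetime.now(timezone.utc)
                - timedelta(days=RETENTION_DAYS)).strftime("%Y-%m-%d %H:%M:%S")
      cur = conn.cursor()
      while True:
          cur.execute(
              f"SELECT id FROM {table} "
              f"WHERE deleted_at IS NOT NULL AND deleted_at < ? LIMIT ?",
              (cutoff, BULK_SIZE))
          ids = [row[0] for row in cur.fetchall()]
          if not ids:
              break
          marks = ",".join("?" * len(ids))
          # Archive first: a row that fails to be archived is never deleted.
          cur.execute(
              f"INSERT INTO {archive_table} "
              f"SELECT * FROM {table} WHERE id IN ({marks})", ids)
          cur.execute(f"DELETE FROM {table} WHERE id IN ({marks})", ids)
          conn.commit()

Working in bulk-sized batches keeps transactions small on huge tables, and doing the insert into the archive destination before the delete is what guarantees that a row which fails to be archived is never removed from the source database.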