[Openstack] nova backup - instances unreachable

Mohammed Naser mnaser at vexxhost.com
Wed Jan 11 14:55:57 UTC 2017


Hi John,

It just works for us with Mitaka. You might be running into a permissions issue where the Nova user is not able to write to the images pool.
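
If you want to rule that out quickly, a check along these lines will tell you whether that user can actually write to the pool. This is just a sketch: it assumes your compute nodes authenticate as client.nova and have the python-rados bindings installed.

import rados

# Connect as the same cephx user that nova-compute uses
# (client.nova is an assumption -- use whatever rbd_user is set to).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.nova')
cluster.connect()
ioctx = cluster.open_ioctx('images')
try:
    # Writing and removing a throwaway object fails if the caps are read-only.
    ioctx.write_full('nova-caps-test', b'x')
    ioctx.remove_object('nova-caps-test')
    print('client.nova can write to the images pool')
finally:
    ioctx.close()
    cluster.shutdown()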

Turn debug on in your nova-compute and snapshot an instance on it; the logs will show what is happening. If nova is shutting the instance down, that's probably because the RBD snapshot failed (in my experience) and it fell back to the older snapshot process.
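
For reference, the settings involved look roughly like this on our side (the pool names and cephx user are examples, adjust them to your deployment), and the write-up you linked also mentions glance needing show_multiple_locations for the direct RBD snapshot path, which is worth double-checking on your side:

# nova.conf on the compute node
[DEFAULT]
debug = True

[libvirt]
images_type = rbd
images_rbd_pool = vms
rbd_user = nova
rbd_secret_uuid = <your libvirt secret uuid>

# glance-api.conf
[DEFAULT]
show_multiple_locations = True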

Thanks
Mohammed 

Sent from my iPhone

> On Jan 11, 2017, at 9:22 AM, John Petrini <jpetrini at coredial.com> wrote:
> 
> Hi Eugen,
> 
> Thanks for the response! That makes a lot of sense and is what I figured was going on, but I missed it in the documentation. We use Ceph as well and I had considered doing the snapshots at the RBD level, but I was hoping there was some way to accomplish this via nova. I came across this Sebastien Han write-up that claims this functionality was added in Mitaka: http://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/
> 
> We are running Mitaka, but our snapshots are not happening at the RBD level; they are being copied and uploaded to Glance, which takes up a lot of space and is very slow.
> 
> Have you or anyone else implemented this in Mitaka? Other than Sebastien's blog I haven't found any documentation on this.
> 
> Thank You,
> 
> ___
> 
> John Petrini
> 
>> On Wed, Jan 11, 2017 at 3:32 AM, Eugen Block <eblock at nde.ag> wrote:
>> Hi,
>> 
>> this seems to be expected; the docs say:
>> 
>> "Shut down the source VM before you take the snapshot to ensure that all data is flushed to disk."
>> 
>> So if the VM is not shut down, it is frozen to prevent data loss (I guess). Depending on your storage backend, there are other ways to perform backups of your VMs.
>> We use Ceph as the backend for nova, glance and cinder; Ceph stores the disks, images and volumes as RADOS Block Device (RBD) objects. We have a backup script that creates snapshots of these RBDs and exports them to our backup drive. This way the running VM is neither stopped nor frozen, and the user doesn't notice anything. Unlike a nova snapshot, an RBD snapshot is created within a few seconds. After a successful backup the snapshots are removed.
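>> 
>> For illustration, a stripped-down sketch of that idea (not our actual script) could look like this; it assumes the python rados/rbd bindings are installed, the disks live in a pool called "vms" and the backup drive is mounted at /backup:
>> 
>> import subprocess
>> import rados
>> import rbd
>> 
>> POOL = 'vms'            # pool holding the nova disks
>> BACKUP_DIR = '/backup'  # mount point of the backup drive
>> SNAP = 'backup'         # snapshot name reused on every run
>> 
>> cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
>> cluster.connect()
>> ioctx = cluster.open_ioctx(POOL)
>> try:
>>     for name in rbd.RBD().list(ioctx):
>>         image = rbd.Image(ioctx, name)
>>         try:
>>             image.create_snap(SNAP)  # copy-on-write, takes a few seconds
>>             # export the frozen snapshot while the VM keeps running
>>             subprocess.check_call(['rbd', 'export',
>>                                    '%s/%s@%s' % (POOL, name, SNAP),
>>                                    '%s/%s.img' % (BACKUP_DIR, name)])
>>             image.remove_snap(SNAP)  # drop the snapshot after a successful export
>>         finally:
>>             image.close()
>> finally:
>>     ioctx.close()
>>     cluster.shutdown()
>> 
>> In practice you would add error handling and dated snapshot names, but create_snap, rbd export and remove_snap are the whole trick.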
>> 
>> Hope this helps! If you are interested in Ceph, visit [1].
>> 
>> Regards,
>> Eugen
>> 
>> [1] http://docs.ceph.com/docs/giant/start/intro/
>> 
>> 
>> Zitat von John Petrini <jpetrini at coredial.com>:
>> 
>> 
>>> Hello,
>>> 
>>> I've just started experimenting with nova backup and discovered that there
>>> is a period of time during the snapshot where the instance becomes
>>> unreachable. Is this behavior expected during a live snapshot? Is there any
>>> way to prevent this?
>>> 
>>> ___
>>> 
>>> John Petrini
>> 
>> 
>> 
>> -- 
>> Eugen Block                             voice   : +49-40-559 51 75
>> NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
>> Postfach 61 03 15
>> D-22423 Hamburg                         e-mail  : eblock at nde.ag
>> 
>>         Vorsitzende des Aufsichtsrates: Angelika Mozdzen
>>           Sitz und Registergericht: Hamburg, HRB 90934
>>                   Vorstand: Jens-U. Mozdzen
>>                    USt-IdNr. DE 814 013 983
>> 
>> 
> 