[openstack-dev] OPENSTACK with CEPH storage backend can't downsize an instance

Adam Lawson alawson at aqorn.com
Mon Jul 27 00:28:30 UTC 2015


Agreed on resizing. Perhaps create a snapshot of the old VM (assuming
non-ephemeral storage), and boot a new VM with a smaller disk size from
the snapshot.
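
That workflow might look something like the sketch below with the `openstack` CLI. Names like `jg-10-snap`, `jg-10-new`, and `m1.small` are placeholders, and this is untested against a live cloud; note also that the snapshot image inherits a `min_disk` from the original flavor, so you may need to lower it after shrinking the guest filesystem.

```shell
# Snapshot the existing server (placeholder names throughout).
openstack server image create --name jg-10-snap jg-10

# The snapshot's min_disk reflects the old (larger) flavor; if the guest
# filesystem has been shrunk first, lower it so a smaller flavor is accepted.
openstack image set --min-disk 20 jg-10-snap

# Boot a replacement server from the snapshot with the smaller flavor.
openstack server create --image jg-10-snap --flavor m1.small jg-10-new
```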

For future reference, you can leverage SAN-specific tools to reclaim unused
disk, such as thin provisioning and/or the vendor's deduplication capabilities.
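
As a small illustration of why thin provisioning saves space: a sparse file's apparent size and the blocks it actually consumes can differ wildly, which is the same trick a thin-provisioned backend plays. (A quick demo assuming GNU coreutils; `sparse.img` is just a scratch file.)

```shell
# Create a file with an apparent size of 1 GiB without writing any data blocks.
truncate -s 1G sparse.img

# Apparent size, i.e. what a guest would "see": 1073741824 bytes.
stat -c %s sparse.img

# Actual disk blocks consumed, i.e. what thin provisioning stores: near zero.
du -k sparse.img

# Clean up the scratch file.
rm sparse.img
```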

*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Fri, Jul 24, 2015 at 12:11 PM, Monty Taylor <mordred at inaugust.com> wrote:

> On 07/24/2015 12:47 PM, Clint Byrum wrote:
> > Excerpts from James Galvin's message of 2015-07-24 09:08:36 -0700:
> >> Hi All
> >>
> >> I am having some trouble with down sizing an instance,
> >>
> >> I can resize the instance from say small flavour to medium flavour but
> when trying to resize the instance back from medium to small
> >>
> >> I get the following :
> >>
> >> Error: Failed to perform requested operation on instance "jg-10", the
> instance has an error status: Please try again later [Error: Flavor's disk
> is too small for requested image.].
> >>
> >> I am using ceph as the storage backend clustered over 3 nodes with 3
> pools "volumes" "vms" "images"
> >>
> >
> > In addition to the note already made about reducing filesystem sizes,
> > I just want to reaffirm that resize is really not the way you want to
> > be using clouds, and IMO should be removed from Nova (but there's enough
> > people who disagree with me that it will probably stay).
> >
> > Anyway, I suggest never using resize, and just deploying new servers,
> > running tests, and then deleting the old ones. Having a cloud with the
> > flexibility and space for this is why you have a cloud.
>
> As a person who runs a system with both long-lived pets and cattle that
> we grind into food, I can attest that we do not use resize. It is a
> much longer-downtime, higher-risk operation than you want.
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
