root disk for instance
Jyoti Dahiwele
jyotishri403 at gmail.com
Thu Jul 11 15:22:45 UTC 2019
Thanks, I'll check it out.
On Thu, 11 Jul 2019, 20:18 Eugen Block, <eblock at nde.ag> wrote:
> > I'm referring to this link
> > https://docs.openstack.org/mitaka/admin-guide/compute-images-instances.html
> > to understand the root disk. It says the root disk will come from the
> > compute node. What is the location of the root disk on the compute node?
>
> First, use a later release; Mitaka is quite old, and Stein is the current
> release.
>
> If you use a default configuration without any specific backend for
> your instances, they will be located on the compute nodes in
> /var/lib/nova/instances/. The respective base images reside in
> /var/lib/nova/instances/_base, so your compute nodes should have
> sufficient disk space.
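> As a rough sketch (the paths shown are the defaults, adjust to your
> deployment), the relevant nova.conf options on a compute node would
> look something like this:
>
>     [DEFAULT]
>     # where ephemeral instance disks and the _base image cache live
>     instances_path = /var/lib/nova/instances
>
>     [libvirt]
>     # local file-backed disks; qcow2 is the usual choice
>     images_type = qcow2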
>
> > If I want to keep all my VMs on shared storage, how do I configure it?
>
> That's up to you; several backends are supported, so you'll have
> to choose one. Many people, myself included, use Ceph as the storage
> backend for Glance, Cinder and Nova.
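> For example, if you went with Ceph/RBD for Nova's ephemeral disks, the
> nova.conf on the compute nodes would contain something along these
> lines (the pool and user names are just examples, not mandatory):
>
>     [libvirt]
>     images_type = rbd
>     images_rbd_pool = vms
>     images_rbd_ceph_conf = /etc/ceph/ceph.conf
>     rbd_user = cinder
>     rbd_secret_uuid = <your libvirt secret uuid>
>
> Glance and Cinder have their own RBD settings; the Ceph documentation
> has a full walkthrough for the OpenStack integration.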
>
> > Or, if I want to keep all my VMs on Cinder volumes, what will be the
> > configuration for that on Nova and Cinder?
>
> I recommend setting up a lab environment where you can learn to deploy
> OpenStack, then play around and test different backends if required.
> The general configuration requirements are covered in the docs [1]. If
> you don't want to configure every single service you can follow a
> deployment guide [2], but that will require skills in Ansible, Juju or
> TripleO. I'd recommend the manual way; that way you learn the basics
> and how the different components interact.
>
> [1] https://docs.openstack.org/stein/install/
> [2] https://docs.openstack.org/stein/deploy/
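> Just to give you a rough idea of what a Cinder backend definition looks
> like, here is a minimal (untested) LVM sketch for cinder.conf. For
> boot-from-volume, Nova itself mostly just needs to reach Cinder through
> the Keystone service catalog (the [cinder] section from the install
> guide); there is no special disk setting required:
>
>     [DEFAULT]
>     enabled_backends = lvm
>
>     [lvm]
>     volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
>     volume_group = cinder-volumes
>     target_protocol = iscsi
>     # or lioadm, depending on your distribution
>     target_helper = tgtadm
>     volume_backend_name = lvm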
>
>
> Quoting Jyoti Dahiwele <jyotishri403 at gmail.com>:
>
> > Thanks for your reply.
> > I'm referring to this link
> > https://docs.openstack.org/mitaka/admin-guide/compute-images-instances.html
> > to understand the root disk. It says the root disk will come from the
> > compute node. What is the location of the root disk on the compute node?
> >
> > If I want to keep all my VMs on shared storage, how do I configure it?
> >
> > Or, if I want to keep all my VMs on Cinder volumes, what will be the
> > configuration for that on Nova and Cinder?
> >
> >
> > On Thu, 11 Jul 2019, 15:24 Eugen Block, <eblock at nde.ag> wrote:
> >
> >> Hi,
> >>
> >> it's always Glance that serves the images; it just depends on how you
> >> decide to create the instance, with an ephemeral or a persistent disk.
> >> You can find more information about storage concepts in [1].
> >>
> >> If I'm not completely wrong, since the Newton release the default in
> >> the Horizon settings is to create an instance from a volume, so it
> >> would be a persistent disk managed by Cinder (the volume persists
> >> after the instance has been deleted; this is also configurable). The
> >> image is downloaded from Glance into a volume on your volume server.
> >>
> >> If you change the Horizon behaviour or launch an instance from the
> >> CLI, you'd get an ephemeral disk managed by Nova. Depending on your
> >> storage backend, this would be a local copy of the image on the
> >> compute node(s) or something equivalent in your backend, e.g. an RBD
> >> object in Ceph.
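> >> To illustrate the difference on the CLI (the names and sizes here are
> >> just examples):
> >>
> >>     # ephemeral root disk, handled by Nova on the compute node
> >>     openstack server create --image cirros --flavor m1.tiny \
> >>         --network private vm-ephemeral
> >>
> >>     # persistent root disk: create a volume from the image first,
> >>     # then boot from that volume (handled by Cinder)
> >>     openstack volume create --image cirros --size 5 rootvol
> >>     openstack server create --volume rootvol --flavor m1.tiny \
> >>         --network private vm-volume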
> >>
> >> Does this clear it up a bit?
> >>
> >> Regards,
> >> Eugen
> >>
> >> [1]
> >> https://docs.openstack.org/arch-design/design-storage/design-storage-concepts.html
> >>
> >>
> >> Quoting Jyoti Dahiwele <jyotishri403 at gmail.com>:
> >>
> >> > Dear Team,
> >> >
> >> > Please clarify the following doubts for me.
> >> > When I use the image boot source option and a mini flavor to create
> >> > an instance, from which storage pool will the instance get its root
> >> > disk? From Cinder or Glance?