[Openstack] Understanding "flavors" of VM

Marco CONSONNI mcocmo62 at gmail.com
Wed Dec 5 07:59:53 UTC 2012


Hello Ahmed,

Good investigation: some of this I knew and some I didn't.

As far as I understand, the _base directory should be a cache for images, NOT a
directory used for instances.

I mean, compute nodes keep an image cache so they do not have to download from
Glance every time they need to start an instance.

To be honest, it seems I missed something because, judging from your
investigation, the storage is kept under _base. Strange. I didn't know that.
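
If you want to double-check that a file under _base really is a cached image,
I believe (please verify, this is from memory) nova names the cached copy after
the SHA-1 hash of the glance image ID, and the _10 / _20 suffixes should be
copies resized to the root disk size in GB. A quick sketch, run on the compute
node, where IMAGE_ID is the UUID reported by 'nova image-list':

$ echo -n "$IMAGE_ID" | sha1sum      # should match the file name under _base
$ ls -lh /var/lib/nova/instances/_base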

Thanks,
Marco.




On Tue, Dec 4, 2012 at 6:35 PM, Ahmed Al-Mehdi <ahmedalmehdi at gmail.com>wrote:

> Hi Marco,
>
> This is really good stuff, thank you very much for helping out.  I am
> creating some instances to test out how/where the different storage-related
> elements are created.
>
> I created two VM instances:
>
> Instance 1 : 20GB boot disk
> Instance 2 : 10GB boot disk, 2 GB Ephemeral disk.
>
> root at bodega:/var/lib/nova# ls -lh -R instances
> instances:
> total 12K
> drwxrwxr-x 2 nova nova 4.0K Dec  4 09:01 _base
> drwxrwxr-x 2 nova nova 4.0K Nov 28 11:44 instance-00000001
> drwxrwxr-x 2 nova nova 4.0K Dec  4 09:01 instance-00000002
>
> instances/_base:
> total 240M
> -rw-r--r-- 1 nova         nova  40M Dec  4 08:51 8af61c9e86557f7244c6e5a2c45e1177c336bd1f
> -rw-r--r-- 1 libvirt-qemu kvm   10G Dec  4 09:01 8af61c9e86557f7244c6e5a2c45e1177c336bd1f_10
> -rw-r--r-- 1 nova         kvm   20G Dec  4 08:51 8af61c9e86557f7244c6e5a2c45e1177c336bd1f_20
> -rw-rw-r-- 1 nova         nova 9.4M Nov 28 11:44 8af61c9e86557f7244c6e5a2c45e1177c336bd1f.part
> -rw-r--r-- 1 nova         nova 2.0G Dec  4 09:01 ephemeral_0_2_None    <======
> -rw-r--r-- 1 libvirt-qemu kvm  2.0G Dec  4 09:01 ephemeral_0_2_None_2  <=====
>
> instances/instance-00000001:
> total 1.9M
> -rw-rw---- 1 nova         kvm   26K Nov 28 11:45 console.log
> -rw-r--r-- 1 libvirt-qemu kvm  1.9M Dec  4 07:01 disk
> -rw-rw-r-- 1 nova         nova 1.4K Nov 28 11:44 libvirt.xml
>
> instances/instance-00000002:
> total 1.8M
> -rw-rw---- 1 libvirt-qemu kvm   27K Dec  4 09:02 console.log
> -rw-r--r-- 1 libvirt-qemu kvm  1.6M Dec  4 09:03 disk
> -rw-r--r-- 1 libvirt-qemu kvm  193K Dec  4 09:01 disk.local
> -rw-rw-r-- 1 nova         nova 1.6K Dec  4 09:01 libvirt.xml
> root at bodega:/var/lib/nova#
>
> It seems all the boot disks and ephemeral disks are created as files in
> /var/lib/nova/instances/_base.  I don't understand why there are two files
> of size 2GB (lines marked above with <=====).  I will look into that later
> on.
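>
> One thing I plan to try, to see how these files relate, is qemu-img: if nova
> is using qcow2 (which I believe is the default for KVM), the per-instance
> "disk" and "disk.local" files should be copy-on-write overlays whose backing
> files live under _base, which would explain why they are only a few MB here.
> A sketch of the check, run on the compute node:
>
> $ cd /var/lib/nova/instances/instance-00000002
> $ sudo qemu-img info disk          # boot disk; look at the "backing file" field
> $ sudo qemu-img info disk.local    # ephemeral disk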
>
> I am running into an issue creating a volume for which I will post a
> separate message.
>
> Thank you again very much.
>
> Regards,
> Ahmed.
>
>
>
>
> On Tue, Dec 4, 2012 at 8:56 AM, Marco CONSONNI <mcocmo62 at gmail.com> wrote:
>
>> Sorry, the directory you need to check is  /var/lib/nova/instances.
>>
>> MCo.
>>
>>
>> On Tue, Dec 4, 2012 at 5:54 PM, Marco CONSONNI <mcocmo62 at gmail.com>wrote:
>>
>>> Hi Ahmed,
>>>
>>> Very technical questions.
>>> I'm not sure my answers are right: I'm just a user...
>>>
>>> In order to answer, I've just looked at what happens and made some guesses.
>>> Feel free to verify yourself.
>>>
>>> I'm assuming you are using KVM, as I am.
>>>
>>> The space for the boot disk and the ephemeral disk should be represented
>>> as files on the physical node where the VM is hosted.
>>> To check that, go to the directory /var/lib/nova on the node where
>>> the VM is running.
>>> As far as I understand, this is where nova (and KVM) keep the running
>>> instances' information.
>>> You should see a directory for each running instance named
>>> instance-xxxxxxx, where xxxxxxx uniquely identifies the instance (there are
>>> several ways to uniquely identify an instance, this is just one of them...
>>> but that is a different story).
>>> Go into one of these and check what you find.
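>>>
>>> If you need to map a server shown by "nova list" to one of those
>>> directories, "nova show" should report the internal name, assuming the
>>> extended server attributes extension is enabled (I think it is by default):
>>>
>>> $ nova show <your-server> | grep instance_name
>>>
>>> The value of OS-EXT-SRV-ATTR:instance_name should match the directory name.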
>>>
>>> As for nova-scheduler, I don't know exactly what it does. I'm afraid
>>> you need to test and see what happens.
>>>
>>> A nova command can help inspect what resources a node is using.
>>>
>>> On the controller node (or any other node where you installed the nova
>>> client), type the following command, substituting OpenStack02 with the name
>>> of the node you want to inspect:
>>>
>>> *$ nova host-describe OpenStack02*
>>>
>>>
>>> +-------------+----------------------------------+-----+-----------+---------+
>>> | HOST        | PROJECT                          | cpu | memory_mb | disk_gb |
>>> +-------------+----------------------------------+-----+-----------+---------+
>>> | OpenStack02 | (total)                          | 16  | 24101     | 90      |
>>> | OpenStack02 | (used_max)                       | 13  | 7680      | 0       |
>>> | OpenStack02 | (used_now)                       | 13  | 8192      | 0       |
>>> | OpenStack02 | 456ec9d355ae4feebe48a2e79e703225 | 4   | 2048      | 0       |
>>> | OpenStack02 | fb434e07b687494bb669fde23f497970 | 9   | 5632      | 0       |
>>> +-------------+----------------------------------+-----+-----------+---------+
>>>
>>> It returns a brief report of the resources currently used by the node.
>>>
>>> To my knowledge, the dashboard does not provide a similar page at the
>>> moment.
>>>
>>> Hope it helps,
>>> Marco.
>>>
>>>
>>>
>>>
>>> On Tue, Dec 4, 2012 at 4:40 PM, Ahmed Al-Mehdi <ahmedalmehdi at gmail.com>wrote:
>>>
>>>> Hi Marco,
>>>>
>>>> Thank you very much for the info, it is much clearer now.  I was looking for
>>>> the boot disk using "ls -l /dev/sd*", but the existence of /dev/vda1
>>>> should have given me a clue.
>>>>
>>>> A few follow up questions:
>>>>
>>>> - I am assuming the space for the VM boot disk is allocated from the
>>>> local hard disk of the physical host on which the VM is instantiated,
>>>> right?
>>>> - If Yes
>>>>    - How is the boot disk represented on the physical host?  Is it a
>>>> file on the local filesystem that represents the VM boot disk?
>>>>    - I am guessing there is some logic in nova-scheduler that checks
>>>> first if there is enough disk space on the physical host for the VM (along
>>>> with RAM and VCPUs) before launching the VM on the host?
>>>>    - Is there any way to find out from Horizon how much disk space is
>>>> available on a (or each) physical host for VM boot disk allocation?
>>>>
>>>> Thank you,
>>>> Ahmed.
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Dec 4, 2012 at 12:07 AM, Marco CONSONNI <mcocmo62 at gmail.com>wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>>
>>>>> When you use a flavor with an ephemeral disk size different from zero,
>>>>> the instance is booted with an extra virtual disk whose size is indicated
>>>>> by the ephemeral value (in GB).
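>>>>>
>>>>> Such a flavor can be created with nova flavor-create; here is a sketch (the
>>>>> name, id and sizes are just examples, and I believe an admin role is
>>>>> needed):
>>>>>
>>>>> # 512 MB RAM, 1 GB root disk, 1 vcpu, 2 GB ephemeral disk
>>>>> $ nova flavor-create m1.ephemeral 6 512 1 1 --ephemeral 2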
>>>>>
>>>>> Using the cirros image, try a flavor with an ephemeral disk size different
>>>>> from zero (you need to create one yourself, as sketched above, because the
>>>>> "standard" flavors have an ephemeral size of 0), then log into the instance
>>>>> you just booted and type:
>>>>>
>>>>>
>>>>> *$ ls /dev/vd**
>>>>>
>>>>> /dev/vda   /dev/vda1  /dev/vdb
>>>>>
>>>>>
>>>>>
>>>>> Disk /dev/vdb is a (virtual) disk, automatically created at boot time,
>>>>> corresponding to the ephemeral disk space indicated by the flavor.  Please
>>>>> note that /dev/vda, whose partition /dev/vda1 is mounted as the root
>>>>> filesystem, is the boot disk, always created when you boot an instance.
>>>>>
>>>>> Verify the size of the available disks; more specifically, verify
>>>>> /dev/vdb:
>>>>>
>>>>>
>>>>> *$ sudo fdisk -l*
>>>>>
>>>>> Disk /dev/vda: 1073 MB, 1073741824 bytes
>>>>>
>>>>> 255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
>>>>>
>>>>> Units = sectors of 1 * 512 = 512 bytes
>>>>>
>>>>> Sector size (logical/physical): 512 bytes / 512 bytes
>>>>>
>>>>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>>>>>
>>>>> Disk identifier: 0x00000000
>>>>>
>>>>>
>>>>>
>>>>>    Device Boot      Start         End      Blocks   Id  System
>>>>>
>>>>> /dev/vda1   *       16065     2088449     1036192+  83  Linux
>>>>>
>>>>>
>>>>>
>>>>> Disk /dev/vdb: 1073 MB, 1073741824 bytes
>>>>>
>>>>> 16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
>>>>>
>>>>> Units = sectors of 1 * 512 = 512 bytes
>>>>>
>>>>> Sector size (logical/physical): 512 bytes / 512 bytes
>>>>>
>>>>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>>>>>
>>>>> Disk identifier: 0x00000000
>>>>>
>>>>>
>>>>>
>>>>> Disk /dev/vdb doesn't contain a valid partition table
>>>>>
>>>>>
>>>>>
>>>>> Please note that /dev/vdb is made available as a raw device, meaning
>>>>> that you need to partition and format it before using it.
>>>>>
>>>>> You can find instructions on how to do that here (search for the fdisk
>>>>> command):
>>>>> http://docs.openstack.org/folsom/openstack-compute/admin/content/configure-nova-volume.html
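>>>>>
>>>>> In short, something along these lines should work from inside the instance
>>>>> (a sketch; the filesystem type and mount point are just examples, adjust to
>>>>> what your image provides):
>>>>>
>>>>> $ sudo fdisk /dev/vdb            # create a single partition, e.g. /dev/vdb1
>>>>> $ sudo mkfs.ext3 /dev/vdb1       # put a filesystem on the new partition
>>>>> $ sudo mount /dev/vdb1 /mnt      # mount it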
>>>>>
>>>>> Also note that this disk, being ephemeral, disappears when you
>>>>> terminate the VM. If you want to keep the data you produce with a VM that
>>>>> is destined to be terminated, you need to use Volumes that you explicitly
>>>>> create and attach using the services implemented by Cinder (formerly
>>>>> nova-volume).
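>>>>>
>>>>> With the nova client, something like this should do (a sketch; the size,
>>>>> server and device are just examples):
>>>>>
>>>>> $ nova volume-create 1                                  # a 1 GB volume
>>>>> $ nova volume-attach <server-id> <volume-id> /dev/vdc
>>>>>
>>>>> Such a volume survives the termination of the instance and can later be
>>>>> attached to another one.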
>>>>>
>>>>>
>>>>> As for the size you define for the boot disk, try launching two instances:
>>>>> one with flavor m1.tiny, the other with m1.small:
>>>>>
>>>>>
>>>>> -- tiny --
>>>>>
>>>>> *$ sudo fdisk -l*
>>>>> Disk /dev/vda: 41 MB, 41126400 bytes
>>>>> 255 heads, 63 sectors/track, 5 cylinders, total 80325 sectors
>>>>> Units = sectors of 1 * 512 = 512 bytes
>>>>> Sector size (logical/physical): 512 bytes / 512 bytes
>>>>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>>>>> Disk identifier: 0x00000000
>>>>>
>>>>>    Device Boot      Start         End      Blocks   Id  System
>>>>> /dev/vda1   *       16065       80324       32130   83  Linux
>>>>>
>>>>>
>>>>> -- small --
>>>>>
>>>>>
>>>>> *$ sudo fdisk -l*
>>>>> Disk /dev/vda: 21.5 GB, 21474836480 bytes
>>>>> 255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
>>>>> Units = sectors of 1 * 512 = 512 bytes
>>>>> Sector size (logical/physical): 512 bytes / 512 bytes
>>>>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>>>>> Disk identifier: 0x00000000
>>>>>
>>>>>    Device Boot      Start         End      Blocks   Id  System
>>>>> /dev/vda1   *       16065    41929649    20956792+  83  Linux
>>>>>
>>>>>
>>>>> As you can see, the size indicated by the flavor affects the size of the
>>>>> boot disk.
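>>>>>
>>>>> If you want to reproduce the test, the two instances can be booted along
>>>>> these lines (the image and instance names are just examples; use whatever
>>>>> "nova image-list" reports):
>>>>>
>>>>> $ nova boot --flavor m1.tiny  --image cirros-0.3.0-x86_64 test-tiny
>>>>> $ nova boot --flavor m1.small --image cirros-0.3.0-x86_64 test-small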
>>>>>
>>>>>
>>>>> Hope it helps,
>>>>> Marco.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Dec 3, 2012 at 7:03 PM, Ahmed Al-Mehdi <ahmedalmehdi at gmail.com
>>>>> > wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I instantiated a VM using the cirros image with the pre-defined
>>>>>> "m1.small" flavor (1 VCPU, 2 GB RAM, 20 GB boot disk, 0 GB ephemeral disk).
>>>>>> I then logged into the console of the VM to view some system stats.  The
>>>>>> number of CPUs and the memory make sense, but I am a bit confused about the
>>>>>> storage aspect.  I see the output of "df -h" as follows:
>>>>>>
>>>>>> $ df -h
>>>>>> Filesystem                Size      Used Available Use% Mounted on
>>>>>> /dev                   1001.1M         0   1001.1M   0% /dev
>>>>>> /dev/vda1                23.2M     12.9M      9.1M  59% /
>>>>>> tmpfs                  1004.1M         0   1004.1M   0% /dev/shm
>>>>>> tmpfs                   200.0K     20.0K    180.0K  10% /run
>>>>>>
>>>>>>
>>>>>> What is the difference between a boot disk and an ephemeral disk?
>>>>>>
>>>>>> How can I correlate the 20 GB boot disk with the output of "df -h"?
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> Ahmed.
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> Mailing list: https://launchpad.net/~openstack
>>>>>> Post to     : openstack at lists.launchpad.net
>>>>>> Unsubscribe : https://launchpad.net/~openstack
>>>>>> More help   : https://help.launchpad.net/ListHelp
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>