[Openstack] Openstack Digest, Vol 3, Issue 42

Jagdish Choudhary jagdish1287 at gmail.com
Sun Sep 29 12:56:41 UTC 2013


Hi All,

I always get confused between ephemeral disk and non-ephemeral
disk. Can you please clarify these terms?


On Sun, Sep 29, 2013 at 5:30 PM, <openstack-request at lists.openstack.org>wrote:

>
> Send Openstack mailing list submissions to
>         openstack at lists.openstack.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> or, via email, send a message with subject or body 'help' to
>         openstack-request at lists.openstack.org
>
> You can reach the person managing the list at
>         openstack-owner at lists.openstack.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Openstack digest..."
>
>
> Today's Topics:
>
>    1. Re: Flavor files and disk and ephemeral sizes (Shake Chen)
>    2. Re: [nova in havana]: vm status update delay when vm process
>       is killed... (Jian Wen)
>    3. Re: the way of managing the shared block storage. RE:
>       Announcing Manila Project (Shared Filesystems     Management)
> (Qixiaozhen)
>    4. I can't access instances by ssh with key (Ma Lei)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 29 Sep 2013 01:44:22 +0800
> From: Shake Chen <shake.chen at gmail.com>
> To: Steven Gessner <gessners at us.ibm.com>
> Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
> Subject: Re: [Openstack] Flavor files and disk and ephemeral sizes
> Message-ID:
>         <
> CAO__-NaGSB5gWY2u1_S9__-odM80tKkNgJOYCXNYXqGQj7Fe0w at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> I also have a question about flavors and the ephemeral disk.
>
> If I install the OS from an ISO, the ephemeral disk is not allowed to be
> zero-sized.
>
> Can the default ephemeral disk size in the flavor be modified?
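Not an authoritative answer, but the relationship this thread keeps circling can be sketched in a few lines of Python. This is an illustrative model of a flavor's disk attributes only (the dicts below are hypothetical examples loosely mirroring the documented m1.small values), not nova's actual code:

```python
# Illustrative sketch of how a flavor's disk attributes combine.
# The flavor values below are hypothetical examples, not authoritative.

def total_provisioned_disk_gb(flavor):
    """Root disk + ephemeral disk + swap (swap is stored in MB)."""
    return (flavor["root_gb"]
            + flavor["ephemeral_gb"]
            + flavor["swap_mb"] / 1024.0)

# Two plausible readings of the same documentation page:
m1_small_docs = {"root_gb": 10, "ephemeral_gb": 20, "swap_mb": 0}  # as documented
m1_small_seen = {"root_gb": 20, "ephemeral_gb": 0, "swap_mb": 0}   # as observed

print(total_provisioned_disk_gb(m1_small_docs))  # 30.0
print(total_provisioned_disk_gb(m1_small_seen))  # 20.0
```

Either way the VM gets at least 20 GB, but the split matters: only the root disk holds the image contents, while the ephemeral disk is scratch space that is lost when the instance is terminated.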
>
>
> On Sat, Sep 28, 2013 at 6:11 AM, Steven Gessner <gessners at us.ibm.com>
> wrote:
>
> > Hi, I hope someone can clarify the questions I have.
> >
> > When I look at the documentation on flavor files:
> >
> http://docs.openstack.org/grizzly/openstack-compute/admin/content/instance-building-blocks.html#instance-building-blocks-flavors
> ,
> > I see the virtual root disk size shown for the m1.small to m1.xlarge
> > flavors remaining constant at 10gig and the ephemeral disk ranging from
> 20
> > to 160gig.  When I look at an OpenStack environment built by a friend, I
> > see the ephemeral size always at 0gig and the virtual root disk size
> > containing the values 20 to 160gig.
> >
> > I know an admin can modify the values but my friend says he didn't.  In
> > addition, I would have expected the virtual root disk to remain constant
> > for the flavors in order to allow me to easily choose different size
> > virtual machines to hold a Linux image.  Is the documentation showing the
> > flavors wrong, or has my friend somehow incorrectly and unknowingly
> > swapped the values in the flavor for virtual root disk size and
> > ephemeral size?
> >
> > Thank you for your time,
> > Steve Gessner
> >
> > _______________________________________________
> > Mailing list:
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > Post to     : openstack at lists.openstack.org
> > Unsubscribe :
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >
> >
>
>
> --
> Shake Chen
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack/attachments/20130929/c2d55a11/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 2
> Date: Sun, 29 Sep 2013 14:06:20 +0800
> From: Jian Wen <jian.wen at canonical.com>
> To: Juha Tynninen <tyky72 at gmail.com>
> Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
> Subject: Re: [Openstack] [nova in havana]: vm status update delay when
>         vm process is killed...
> Message-ID:
>         <CAMunoaOAWYUDb=
> f0VkTVtvfLi5ZTQtD7x6daretCHMpzMtrVsg at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hello, Juha
>
> I can't reproduce this bug in a devstack environment freshly installed
> today.
>
> An instance's power state should be synced within a very short delay
> window since the following blueprint landed:
> https://blueprints.launchpad.net/nova/+spec/compute-driver-events
>
> Which exact version of nova are you using?
> Could you file a bug?
>
> On Thu, Sep 19, 2013 at 9:04 PM, Juha Tynninen <tyky72 at gmail.com> wrote:
>
> > Hi,
> >
> > Ok, seems to be I can eliminate the delay by editing the value for the
> > nova/compute/manager.py related configuration item in nova.conf:
> >
> > cfg.IntOpt('sync_power_state_interval',
> >                default=600,
> >                help='interval to sync power states between '
> >                     'the database and the hypervisor'),
> >
> > In the grizzly code there is also this default 600 seconds interval
> > defined for this check, but still there wasn't this kind of delay
> > present... I wonder what has changed.
> >
> > What do you think: are there any negative effects from setting
> > sync_power_state_interval very low in havana (e.g. 1s or even lower)?
> >
> > Thanks,
> > -Juha
> >
> >
> >
> > On 18 September 2013 12:41, Juha Tynninen <tyky72 at gmail.com> wrote:
> >
> >> Hi,
> >>
> >> Previously in grizzly, if I killed the VM process (for devstack, the
> >> relevant qemu-system process), nova almost immediately noticed this and
> >> the status of the VM was updated to SHUTOFF.
> >>
> >> But now in havana it takes about 5 minutes for the VM status to change
> >> to SHUTOFF.
> >> Does anyone know what causes this delay?
> >>
> >> Thanks,
> >> -Juha
> >>
> >>
> >>
> >
> > _______________________________________________
> > Mailing list:
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > Post to     : openstack at lists.openstack.org
> > Unsubscribe :
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >
> >
>
>
> --
> Cheers,
> Jian
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack/attachments/20130929/ac6e5be2/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 3
> Date: Sun, 29 Sep 2013 08:08:31 +0000
> From: Qixiaozhen <qixiaozhen at huawei.com>
> To: Caitlin Bestler <caitlin.bestler at nexenta.com>
> Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
> Subject: Re: [Openstack] the way of managing the shared block storage.
>         RE: Announcing Manila Project (Shared Filesystems       Management)
> Message-ID:
>         <
> F3FF02E5B1A52F44A89CA6A3523CE557718263E3 at szxema507-mbx.china.huawei.com>
>
> Content-Type: text/plain; charset="us-ascii"
>
>
> Is there any plan in OpenStack for managing shared block storage using
> only the data plane, like VMFS in VMware or SPM+LVM2 in oVirt?
>
> If the management plane of the SAN is unreachable, how does OpenStack
> handle this?
>
>
>
> Qi Xiaozhen
> CLOUD OS PDU, IT Product Line, Huawei Enterprise Business Group
> Mobile: +86 13609283376    Tel: +86 29-89191578
> Email: qixiaozhen at huawei.com
> enterprise.huawei.com
>
> -----Original Message-----
> From: Caitlin Bestler [mailto:caitlin.bestler at nexenta.com]
> Sent: Saturday, September 28, 2013 12:08 AM
> To: Qixiaozhen
> Cc: openstack at lists.openstack.org
> Subject: Re: the way of managing the shared block storage. RE: [Openstack]
> Announcing Manila Project (Shared Filesystems Management)
>
> On 9/26/2013 7:09 PM, Qixiaozhen wrote:
> > Hi, all
> >
> > Is there a common way to manage the block storage of an unknown vendor
> > SAN?
> >
> > For example, a Linux server shares its local disks via target software
> > (iscsitarget, LIO, etc.). The compute nodes are connected to the target
> > over iSCSI sessions, and the LUNs are already rescanned.
> >
> > VMFS is introduced in VMware to manage the LUNs shared by the SAN.
> > oVirt's VDSM organizes the metadata of the volumes in the LUN with LVM2
> > and the StoragePoolManager. How about OpenStack?
> >
> > Best regards,
> >
>
> The standard protocols (iSCSI, NFS, CIFS, etc.) generally only address
> the user plane and partially the control plane. Standardizing the
> management plane is left to the user or vendors. One of the roles
> of OpenStack is to fill that gap.
>
> Cinder addresses block storage.
> The proposed Manila project would deal with NAS.
>
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Sun, 29 Sep 2013 17:07:15 +0800
> From: "Ma Lei" <malei at cn.fujitsu.com>
> To: <openstack at lists.openstack.org>
> Subject: [Openstack] I can't access instances by ssh with key
> Message-ID: <000601cebcf3$4a897400$df9c5c00$@fujitsu.com>
> Content-Type: text/plain; charset="us-ascii"
>
> I have created three instances using Ubuntu's official image on my
> OpenStack controller node.
>
> Two of them have run for more than three weeks.
>
> The third has run for two weeks.
>
> Over the past weeks they all worked normally.
>
>
>
> Suddenly, none of them can be accessed by ssh with the key.
>
> And when I ping them, the reply is "Request timed out".
>
> On the dashboard, the status is active, and the power state is running.
>
>
>
> The instance console log is:
>
> cloud-init start-local running: Fri, 27 Sep 2013 05:30:33 +0000. up 5.54 seconds
> no instance data found in start-local
> ci-info: lo    : 1 127.0.0.1       255.0.0.0       .
> ci-info: eth0  : 1 10.167.35.142   255.255.255.0   fa:16:3e:31:83:9c
> ci-info: route-0: 0.0.0.0         10.167.35.1     0.0.0.0         eth0   UG
> ci-info: route-1: 10.167.35.0     0.0.0.0         255.255.255.0   eth0   U
> ci-info: route-2: 169.254.169.254 10.167.35.141   255.255.255.255 eth0   UGH
> cloud-init start running: Fri, 27 Sep 2013 05:30:34 +0000. up 6.09 seconds
> found data source: DataSourceEc2
> 2013-09-27 05:30:47,626 - __init__.py[WARNING]: Unhandled non-multipart userdata ''
> Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
>  * Starting AppArmor profiles [ OK ]
> landscape-client is not configured, please run landscape-config.
>  * Stopping System V initialisation compatibility [ OK ]
>  * Starting System V runlevel compatibility [ OK ]
>  * Starting save kernel messages [ OK ]
> acpid: starting up with proc fs
>  * Starting ACPI daemon [ OK ]
>  * Starting automatic crash report generation [ OK ]
>  * Starting regular background program processing daemon [ OK ]
>  * Starting deferred execution scheduler [ OK ]
>  * Stopping save kernel messages [ OK ]
>  * Starting CPU interrupts balancing daemon [ OK ]
> acpid: 1 rule loaded
> acpid: waiting for events: event logging is off
>  * Starting MySQL Server [fail]
>  * Starting crash report submission daemon [ OK ]
>  * Starting system logging daemon [ OK ]
>  * Starting Handle applying cloud-config [ OK ]
>  * Stopping Handle applying cloud-config [ OK ]
> apache2: apr_sockaddr_info_get() failed for webserver
> apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
>  * Starting web server apache2 [ OK ]
>  * Stopping System V runlevel compatibility [ OK ]
>  * Starting execute cloud user/final scripts [ OK ]
> cloud-init boot finished at Fri, 27 Sep 2013 05:30:50 +0000. Up 22.48 seconds
> [ 3601.676137] INFO: task jbd2/vdc1-8:1980 blocked for more than 120 seconds.
> [ 3602.155585] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 3722.759789] INFO: task jbd2/vdc1-8:1980 blocked for more than 120 seconds.
> [ 3722.760682] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 3842.760065] INFO: task jbd2/vdc1-8:1980 blocked for more than 120 seconds.
> [ 3842.760815] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 3962.760093] INFO: task jbd2/vdc1-8:1980 blocked for more than 120 seconds.
> [ 3962.760839] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4082.760060] INFO: task jbd2/vdc1-8:1980 blocked for more than 120 seconds.
> [ 4082.760932] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4202.760097] INFO: task jbd2/vdc1-8:1980 blocked for more than 120 seconds.
> [ 4202.760985] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4322.760058] INFO: task jbd2/vdc1-8:1980 blocked for more than 120 seconds.
> [ 4322.760936] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4442.760085] INFO: task jbd2/vdc1-8:1980 blocked for more than 120 seconds.
> [ 4442.761002] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4682.760064] INFO: task jbd2/vdc1-8:1980 blocked for more than 120 seconds.
> [ 4682.761029] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 5762.760082] INFO: task jbd2/vdc1-8:1980 blocked for more than 120 seconds.
> [ 5762.761044] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
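As a side note for anyone triaging logs like the one above: the repeated hung-task pattern is easy to spot programmatically. A minimal sketch (a hypothetical helper, not an OpenStack tool) that scans a console log for kernel blocked-task warnings like the jbd2 ones shown, which typically indicate the guest's virtual disk has stopped responding:

```python
import re

# Matches kernel hung-task warnings such as:
#   [ 3601.676137] INFO: task jbd2/vdc1-8:1980 blocked for more than 120 seconds.
HUNG_TASK_RE = re.compile(
    r"INFO: task (?P<task>\S+) blocked for more than (?P<secs>\d+) seconds"
)

def find_hung_tasks(console_log):
    """Return (task_name, seconds) pairs for every hung-task warning."""
    return [(m.group("task"), int(m.group("secs")))
            for m in HUNG_TASK_RE.finditer(console_log)]

sample = (
    '[ 3601.676137] INFO: task jbd2/vdc1-8:1980 blocked for more than 120 seconds.\n'
    '[ 3602.155585] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" ...\n'
    '[ 3722.759789] INFO: task jbd2/vdc1-8:1980 blocked for more than 120 seconds.\n'
)
print(find_hung_tasks(sample))
# [('jbd2/vdc1-8:1980', 120), ('jbd2/vdc1-8:1980', 120)]
```

A jbd2 journal thread blocked this way points at the storage backing the instance (Cinder volume, NFS share, or the compute host's disk) rather than at nova itself, which also fits ssh and ping both failing.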
>
>
>
> Thanks.
>
>
>
> malei
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack/attachments/20130929/1976a68d/attachment-0001.html
> >
>
> ------------------------------
>
> _______________________________________________
> Openstack mailing list
> openstack at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
> End of Openstack Digest, Vol 3, Issue 42
> ****************************************
>
>


-- 
Thanks and Regards,
Jagdish Choudhary
IBM India Pvt Ltd, Bangalore
M.No-8971011661
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack/attachments/20130929/485ae700/attachment.html>

