[openstack-dev] [Nova] z/VM introducing a new config drive format

Chen CH Ji jichenjc at cn.ibm.com
Mon Apr 16 06:56:06 UTC 2018


Thanks for the question and comments

>>> metadata service question
Fully agree that the metadata service is something we need to support.
It requires some network setup (NAT), and as you pointed out, some
functions are missing without it, so it is already in our support plan:
currently we plan to use config drive, and later (together with
enhancements to our neutron support) we will support the metadata
service.
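
As background on what the metadata service adds: a guest can query it
over the network at any time, with no attached device and no reboot.
A minimal sketch of such a query from inside a guest, assuming the
standard OpenStack endpoint (the helper name is illustrative):

import urllib.request

# Standard OpenStack metadata endpoint; reaching it from the guest is
# exactly what needs the NAT/neutron setup mentioned above.
METADATA_URL = "http://169.254.169.254/openstack/latest/meta_data.json"

def fetch_metadata(timeout=5):
    # A live query: unlike a config drive, the answer can change after
    # boot (e.g. device tags added on hot-attach).
    with urllib.request.urlopen(METADATA_URL, timeout=timeout) as resp:
        return resp.read().decode()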

>>> The "iso file" will not be inside the guest, but rather passed to
the guest as a block device, right?
Cloud-init expects to find a config drive that meets the requirements
in [1], so to let cloud-init consume a config drive we must be able to
prepare one. On some hypervisors you can define something like the
following on the VM, and the VM is able to consume it at startup:
<source file="/var/log/cloud/new/abc.iso"/>
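
To illustrate what preparing a config drive involves, here is a
minimal sketch (not the actual Nova code) of building such an iso.
The helper name and metadata content are illustrative; the genisoimage
flags follow the ones Nova's config drive builder uses, producing
ISO 9660 with Rock Ridge/Joliet extensions and the "config-2" volume
label that cloud-init searches for:

import json
import os
import subprocess
import tempfile

def build_config_drive(iso_path, meta_data):
    # Stage the directory layout cloud-init expects, per [1]:
    # openstack/latest/meta_data.json (and friends).
    with tempfile.TemporaryDirectory() as staging:
        latest = os.path.join(staging, "openstack", "latest")
        os.makedirs(latest)
        with open(os.path.join(latest, "meta_data.json"), "w") as f:
            json.dump(meta_data, f)
        subprocess.run(
            ["genisoimage", "-o", iso_path, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-quiet", "-J", "-r",
             "-V", "config-2", staging],
            check=True)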
In the z/VM case, however, a disk can be created during the VM create
(define) stage, but no disk format is set; it is the operating system's
responsibility to define the purpose of the disk. So what we do is:
1) When we build the image, we create a small AE (activation engine),
similar to cloud-init, whose only purpose is to fetch files from the
z/VM internal pipe and handle the config drive case.
2) During spawn, we create the config drive on the nova-compute side
and send the file to z/VM through the z/VM internal pipe (details
omitted here).
3) During startup of the virtual machine, the small AE mounts the file
as a loop device, and cloud-init in turn is able to handle it (see the
sketch after this list).
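
A minimal sketch of the loop-device step in 3), assuming the config
drive iso has already been delivered to a path inside the guest (the
path and mount point are illustrative; the real logic lives in the
documented AE):

import os
import subprocess

ISO_PATH = "/var/lib/zvm-ae/cfgdrive.iso"   # illustrative delivery path
MOUNT_POINT = "/mnt/zvm-configdrive"        # illustrative mount point

def mount_config_drive():
    os.makedirs(MOUNT_POINT, exist_ok=True)
    # 'mount -o loop' binds the iso to a loop block device and mounts
    # it in one step; the loop device then carries the "config-2" label
    # that cloud-init's ConfigDrive data source looks for.
    subprocess.run(["mount", "-o", "loop,ro", ISO_PATH, MOUNT_POINT],
                   check=True)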

Because this is special to our case, we don't want to upstream it to
the cloud-init community, given its uniqueness, and as far as we can
tell cloud-init offers no hook mechanism that would let us do the
'mount -o loop' step either. Also, from the OpenStack point of view,
apart from this small AE (which is well documented), there is nothing
special about or inconsistent with the other drivers.

[1]
https://github.com/number5/cloud-init/blob/master/cloudinit/sources/DataSourceConfigDrive.py#L225

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN   Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:	Dan Smith <dms at danplanet.com>
To:	"Chen CH Ji" <jichenjc at cn.ibm.com>
Cc:	"OpenStack Development Mailing List (not for usage questions)"
            <openstack-dev at lists.openstack.org>
Date:	04/13/2018 09:46 PM
Subject:	Re: [openstack-dev] [Nova] z/VM introducing a new config
            drive format



> For the run_validation=False issue, you are right: because the z/VM
> driver only supports config drive and doesn't support the metadata
> service, we made a bad assumption and took the wrong action of
> disabling the whole ssh check. Actually, according to [1], we should
> only disable CONF.compute_feature_enabled.metadata_service but keep
> both self.run_ssh and CONF.compute_feature_enabled.config_drive as
> True in order to make the config drive test validation take effect;
> our CI will handle that.

Why don't you support the metadata service? That's a pretty fundamental
mechanism for nova and openstack. It's the only way you can get a live
copy of metadata, and it's the only way you can get access to device
tags when you hot-attach something. Personally, I think that it's
something that needs to work.

> For the tgz/iso9660 question below: this is because we got wrong
> info from low-layer component folks back in 2012. After discussing
> with some experts again, we actually can create the iso9660 image in
> the driver layer, pass it down to the spawned virtual machine, and
> during the startup process the VM itself will mount the iso file and
> consume it. From the Linux perspective, tgz versus iso9660 doesn't
> matter; we only need some files in order to transfer the information
> from the OpenStack compute node to the spawned VM. So our action is
> to change the format from tgz to iso9660 and stay consistent with
> the other drivers.

The "iso file" will not be inside the guest, but rather passed to the
guest as a block device, right?

> For the config drive working mechanism question: according to [2],
> z/VM is a Type 1 hypervisor while Qemu/KVM are most likely Type 2
> hypervisors. There is no file system in the z/VM hypervisor (I omit
> much detail here), so we can't do what a Linux operating system does
> and keep the file as a qcow2 image in the host operating system.

I'm not sure what the type-1-ness has to do with this. The hypervisor
doesn't need to support any specific filesystem for this to work. Many
drivers we have in the tree are type-1 (xen, vmware, hyperv, powervm)
and you can argue that KVM is type-1-ish. They support configdrive.

> What we do is use a special file pool to store the config drive, and
> during the VM init process we read that file from the special device
> and attach it to the VM in iso9660 format; cloud-init then handles
> the follow-up. The cloud-init handling process is identical to other
> platforms.

This and the previous mention of this sort of behavior has me
concerned. Are you describing some sort of process that runs when the
instance is starting to initialize its environment, or something that
runs *inside* the instance and thus functionality that has to exist in
the *image* to work?

--Dan


