Thanks for the question and comments.

>>> metadata service question
Fully agree that the metadata service is something we need to support, and since it needs some network setup on NAT, as you pointed out, some functions are missing without it. It is already in our support plan: currently we plan to use config drive, and later (along with the enhancements to our neutron support) to add metadata service support.

>>> The "iso file" will not be inside the guest, but rather passed to the guest as a block device, right?
cloud-init expects to find a config drive that meets the requirements in [1]. In order for cloud-init to consume the config drive, we have to be able to prepare it. On some hypervisors you can define something like the following for the VM, and the VM can then consume it at startup:

<source file="/var/log/cloud/new/abc.iso"/>

But in the z/VM case, a disk can be created during the VM create (define) stage with no disk format set; it is the operating system's responsibility to define the purpose of the disk. So what we do is:

1) When we build the image, we create a small AE, similar to cloud-init, whose only purpose is to fetch files from the z/VM internal pipe and handle the config drive case.
2) During spawn, we create the config drive on the nova-compute side and send the file to z/VM through the z/VM internal pipe (details omitted here).
3) During startup of the virtual machine, the small AE mounts the file as a loop device, and cloud-init in turn is able to handle it (a rough sketch of this step is shown below).

Because this is our special case, we don't want to push it to the cloud-init community given its uniqueness, and as far as we can tell there is no hook in the cloud-init mechanism that would let us do the 'mount -o loop' either. Also, from the OpenStack point of view, apart from this small AE (which is well documented), there is nothing special or inconsistent with other drivers.

[1] https://github.com/number5/cloud-init/blob/master/cloudinit/sources/DataSourceConfigDrive.py#L225
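For illustration only, here is a minimal sketch of what step 3 could look like inside the small AE; the file path, mount point, and the script itself are hypothetical, not the actual z/VM AE implementation:

#!/usr/bin/env python
"""Hypothetical sketch of the small AE's loop-mount step (step 3 above).

The ISO path and mount point are illustrative only; this is not the
real z/VM AE code.
"""
import os
import subprocess

# ISO delivered to the guest through the z/VM internal pipe during spawn
# (assumed location for this sketch).
CFGDRIVE_ISO = "/var/opt/zvm/cfgdrive.iso"
MOUNT_POINT = "/media/configdrive"


def mount_config_drive():
    """Attach the ISO through a loop device so cloud-init can consume it."""
    if not os.path.isdir(MOUNT_POINT):
        os.makedirs(MOUNT_POINT)
    # Equivalent to the `mount -o loop` mentioned above: the loop device
    # exposes the iso9660 filesystem, and cloud-init's ConfigDrive data
    # source then handles it the same way as on other platforms.
    subprocess.check_call(
        ["mount", "-o", "loop,ro", CFGDRIVE_ISO, MOUNT_POINT])


if __name__ == "__main__":
    mount_config_drive()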
Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jichenjc@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: Dan Smith <dms@danplanet.com>
To: "Chen CH Ji" <jichenjc@cn.ibm.com>
Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 04/13/2018 09:46 PM
Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config drive format

> For the run_validation=False issue, you are right: because the z/VM driver
> only supports config drive and doesn't support the metadata service, we made
> a bad assumption and took the wrong action by disabling the whole ssh check.
> Actually, according to [1], we should only disable
> CONF.compute_feature_enabled.metadata_service but keep both
> self.run_ssh and CONF.compute_feature_enabled.config_drive as True in
> order to make the config drive test validation take effect; our CI will
> handle that.

Why don't you support the metadata service? That's a pretty fundamental
mechanism for nova and openstack. It's the only way you can get a live
copy of metadata, and it's the only way you can get access to device
tags when you hot-attach something. Personally, I think that it's
something that needs to work.

> For the tgz/iso9660 question below, this is because we got wrong information
> from low-layer component folks back in 2012. After discussing with some
> experts again, we actually can create iso9660 in the driver layer and pass
> it down to the spawned virtual machine; during the startup process the VM
> itself will mount the iso file and consume it. From the linux perspective,
> either tgz or iso9660 doesn't matter; we only need some files in order to
> transfer the information from the openstack compute node to the spawned VM.
> So our action is to change the format from tgz to iso9660 and keep
> consistent with other drivers.

The "iso file" will not be inside the guest, but rather passed to the
guest as a block device, right?

> For the config drive working mechanism question, according to [2] z/VM is
> a Type 1 hypervisor while Qemu/KVM are most likely Type 2 hypervisors;
> there is no file system in the z/VM hypervisor (I omit too much detail
> here), so we can't do what a linux operating system does and keep a file
> as a qcow2 image in the host operating system.

I'm not sure what the type-1-ness has to do with this. The hypervisor
doesn't need to support any specific filesystem for this to work.
Many drivers we have in the tree are type-1 (xen, vmware, hyperv, powervm)
and you can argue that KVM is type-1-ish. They support configdrive.

> What we do is use a special file pool to store the config drive, and during
> the VM init process we read that file from the special device and attach it
> to the VM in iso9660 format; cloud-init then handles the follow-up. The
> cloud-init handling process is identical to other platforms.

This and the previous mention of this sort of behavior has me
concerned. Are you describing some sort of process that runs when the
instance is starting to initialize its environment, or something that
runs *inside* the instance and thus functionality that has to exist in
the *image* to work?

--Dan
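Purely as an illustration of the tempest flags mentioned at the top of the quoted mail, a sketch of how they might be set in tempest.conf (the option names come from the discussion above; the actual CI configuration may differ):

[validation]
# keep ssh validation (self.run_ssh) enabled so the config drive test
# really verifies the drive contents
run_validation = True

[compute-feature-enabled]
# z/VM currently supports config drive but not the metadata service
config_drive = True
metadata_service = False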