> On 1 Sep 2020, at 17:15, Artom Lifshitz <alifshit@redhat.com> wrote:
>
> IIUC one of our (Red Hat's) customer-facing folks brought us a similar
> question recently. In their case they wanted to use PCI passthrough to
> pass an NVMe disk to an instance. This is technically possible, but
> there would be major privacy concerns in a multi-tenant cloud, as Nova
> currently has no way of cleaning up a disk after a VM has left it, so
> either the guest OS would have to do it itself, or any subsequent VM
> using that disk would have access to all of the previous VM's data
> (this could be mitigated by full-disk encryption, though). Cleaning up
> disks after VMs would probably fall more within Cyborg's scope...
>
> There's also the question of instance move operations like live and
> cold migrations - what happens to the passed-through disk in those
> cases? Does Nova have to copy it to the destination? I think those
> would be fairly easy to address, though (there are no major technical
> or political challenges, it's just a matter of someone writing the
> code and reviewing it).
>
> The disk cleanup thing is going to be harder, I suspect - more
> politically than technically. It's a bit of a chicken-and-egg problem
> with Nova and Cyborg at the moment. Nova can refuse features as being
> out of scope and punt them to Cyborg, but I'm not sure how
> production-ready Cyborg is...

Does the LVM passthrough option help for direct attach of a local disk?

https://cloudnull.io/2017/12/nova-lvm-an-iop-love-story/
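For reference, I believe what that post describes is Nova's LVM image
backend. A minimal nova.conf sketch would look something like this (the
volume group name is just an example - it has to match a VG you have
created on the compute node):

    [libvirt]
    # Back instance ephemeral disks with LVM logical volumes instead of
    # qcow2 files, for close-to-raw local disk performance.
    images_type = lvm
    # Volume group on the compute host from which the per-instance
    # logical volumes are carved (example name).
    images_volume_group = nova-vg

As far as I understand, this gives the guest an LVM-backed ephemeral
disk rather than a whole physical device, so it may only cover the
performance side of the use case.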
Tim

> On Tue, Sep 1, 2020 at 10:41 AM Thomas Goirand <zigo@debian.org> wrote:
>>
>> Hi Nova team!
>>
>> tl;dr: we would like to contribute giving instances access to physical
>> block devices directly on the compute hosts. Would this be accepted?
>>
>> Longer version:
>>
>> About 3 or 4 years ago, someone wrote a spec so that we'd be able to
>> provide a local disk of a compute node directly to a VM. It was
>> rejected at the time because Cinder had the blockdevice driver, which
>> achieved more or less the same thing. Unfortunately, because nobody
>> was maintaining the blockdevice driver in Cinder, and because there
>> was no CI that could test it, the driver got removed.
>>
>> We've investigated how we could otherwise implement it, and one
>> solution would be to use Cinder, but then we'd be going through an
>> iSCSI export, which would drastically reduce performance.
>>
>> Another solution would be to manage KVM instances by hand, without
>> touching libvirt and/or Open vSwitch, but then we would lose the ease
>> of using the Nova API, so we would prefer to avoid this direction.
>>
>> So we (i.e. employees in my company) need to ask the Nova team: would
>> you consider a spec to do what was rejected before, since there's now
>> no good enough alternative?
>>
>> Our current goal is to be able to provide a disk directly to a VM, so
>> that we could build Ceph clusters with a hyper-converged model (i.e.
>> storage hosted on the compute nodes). In this model, we wouldn't need
>> live migration of a VM with an attached physical block device (though
>> the feature could be added at a later stage).
>>
>> Before we start investigating how this can be done, I need to know
>> whether this has at least some chance of being accepted. If it does,
>> we'll probably start with an experimental patch locally, then write a
>> spec to properly start this project. So please let us know.
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)