So the simple truth is that today there is no way to achieve your goal in a supported way. The closest you can get is to abuse the PCI passthrough functionality: if you have an NVMe SSD and you want to assign it to a guest, you could use PCI passthrough to do that. However, there would be no multi-tenancy support in that case, i.e. nothing will erase the disk when the VM is deleted and nothing will copy the data if the VM is moved.

The Intel persistent memory feature would also have been good for your use case, if Intel and Micron had not abandoned the technology. It had support for direct passthrough of persistent memory from the host to a guest, which could be used as a high-performance persistent data store that is local and is migrated and deleted as appropriate.

You can modify the guest XML using hooks or other means, but if you do so then the instance is no longer supported upstream, and likely not downstream either. I.e. if you hit any bug and you can't reproduce it without your hooks, then we would not fix it. And from a downstream support point of view, modifying a nova instance in any way directly via libvirt generally makes the VM unsupported, so if you have a vendor you have a support contract with, you should check with them before taking an out-of-band approach.

What I would recommend is bringing this use case to the next PTG in April and working with the community to develop a spec and add support to nova to do this properly. To be clear, I am pretty open to having a way to provide a local SSD, or part of one, to a guest, with local persistence, data transfer on migration, and data erasure on instance delete. We have a generic resources table in the DB that was added for Intel persistent memory, with the intention that we could track and allocate future host resources without needing any more DB changes. We could add tracking of host block devices, or even the ability to allocate folders (via virtio-fs) or LVM volumes directly mapped to the guest.
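For completeness, a rough sketch of what the PCI passthrough option mentioned above could look like, with the same caveats (no data erasure on delete, no data copy on move). The vendor/product IDs and the alias name are placeholders; you would substitute the IDs of your actual NVMe controller from `lspci -nn`:

```ini
# nova.conf on the compute node: expose the NVMe controller to nova.
# ("device_spec" replaced the older "passthrough_whitelist" option in
# recent releases; use whichever your release supports.)
[pci]
device_spec = { "vendor_id": "8086", "product_id": "0953" }
alias = { "vendor_id": "8086", "product_id": "0953", "device_type": "type-PCI", "name": "local-nvme" }
```

The same `[pci] alias` also needs to be set on the controller nodes so the scheduler can account for it, and the device is then requested through a flavor property, e.g. `openstack flavor set my-flavor --property pci_passthrough:alias=local-nvme:1`.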
This is a non-trivial amount of work, however. For reference, this is what we did for persistent memory: https://specs.openstack.org/openstack/nova-specs/specs/train/implemented/vir... https://specs.openstack.org/openstack/nova-specs/specs/xena/approved/allow-m... Adding management of assignable block devices would be similar, although all the generic infrastructure from the pmem feature is already developed, so it would be less work in general.

The main stumbling block that would have to be designed and agreed is whether this additional local persistent storage would be requested via the flavor or dynamically attached via a new REST API. Pmem was requested via the flavor. Personally I think the dynamic approach would be more desirable, but the flavor-based approach is certainly simpler. It would be nice to just be able to have a local block device attach/detach API, like how we attach/detach cinder volumes but using local storage. Adding a new API to nova, however, is a lot of work, because we can't easily remove it again, so we are very careful when doing that.

I'm sorry I can't just recommend how to achieve your goal with what we have today. One thing you could try is looking at alternative images_type drivers. If using raw/qcow images does not give you the performance you need, the LVM driver has historically outperformed both qcow and raw in write-intensive workloads. That nova images_type driver, however, is less well maintained than the others, so the likelihood of encountering bugs is higher. Changing the nova storage driver also won't allow you to dynamically add additional storage to a VM without resizing to a different flavor, so it is missing the dynamic aspect, but if your workloads are static then it is an option.

On Wed, 2024-01-24 at 09:09 +0000, Karl Kloppenborg wrote:
I suspect you’ll need to provide mounted ephemeral disk space by way of configuring nova/libvirt.
But I don’t think that’s going to be much help here as I think they’re wanting persistence?
From: 龚永生 (Gong Yongsheng) <gong.yongsheng@99cloud.net> Sent: Wednesday, January 24, 2024 2:35:37 PM To: Sang Tran Quoc <SangTQ8@fpt.com> Cc: Karl Kloppenborg <kkloppenborg@resetdata.com.au>; smooney@redhat.com; openstack-discuss@lists.openstack.org; Cuong Truong Tran Quoc <CuongTTQ@fpt.com> Subject: Re: RE: Mapping local SSD to virtual machine persistently
How about ephemeral storage with the ssd?
龚永生 (Gong Yongsheng) 99CLOUD Co. Ltd. (浙江九州未来信息科技有限公司) Email: gong.yongsheng@99cloud.net Address: 5F (south side), Advantech Building, Shangdi 6th Street, Haidian District Mobile: +86-18618199879 Website: http://99cloud.net
From: Sang Tran Quoc <SangTQ8@fpt.com> Sent: 2024-01-24 11:06:28 To: Karl Kloppenborg <kkloppenborg@resetdata.com.au>, smooney@redhat.com, openstack-discuss@lists.openstack.org Cc: Cuong Truong Tran Quoc <CuongTTQ@fpt.com> Subject: RE: Mapping local SSD to virtual machine persistently
Hi @smooney@redhat.com and @Karl Kloppenborg,
Thanks for your help. I also have some questions regarding your suggestions, as below:
@smooney@redhat.com Did you mean cinder-volume with the LVM driver? I tested it before, but the performance did not meet the requirement because the data is transported via the iSCSI protocol, which is hard to compare with a direct attach via libvirt. Your second idea is to use a flavor with an ephemeral disk, right?
@Karl Kloppenborg Your workaround suggestion looks good to me; I will search around for the exact RabbitMQ events you mentioned.
Best Regards
Sang.
From: Karl Kloppenborg <kkloppenborg@resetdata.com.au> Sent: Wednesday, January 24, 2024 6:45 AM To: smooney@redhat.com; Sang Tran Quoc <SangTQ8@fpt.com>; openstack-discuss@lists.openstack.org Subject: Re: Mapping local SSD to virtual machine persistently
Hey guys,
One suggestion that might help you as a workaround:
The libvirt XML is regenerated when nova receives events via the RabbitMQ event bus.
One option (which will require a bit of programming on your part) is to listen to the events that might impact the libvirt generation and execute the commands on your behalf.
This would necessitate knowing/storing that VM-to-device mapping somewhere, i.e. a database.
Thanks, Karl.
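A minimal sketch of the event-driven approach Karl describes above. The event names follow nova's legacy unversioned notification format; the mapping table, the set of events to react to, and the instance UUIDs are illustrative assumptions. The actual re-attach (e.g. shelling out to `virsh attach-disk`) and the bus subscription (e.g. via oslo.messaging or kombu) are deliberately left out:

```python
# Hypothetical VM -> host-device mapping; in a real deployment this
# would live in a database, as suggested above.
DEVICE_MAP = {
    "instance-uuid-1": [("/dev/nvme0n1", "vdb")],
}

# Lifecycle events after which nova regenerates the libvirt XML, losing
# any manually attached disks (assumed set; verify against your cloud's
# actual notification traffic).
REATTACH_EVENTS = {
    "compute.instance.power_on.end",
    "compute.instance.reboot.end",
    "compute.instance.resume.end",
}

def devices_to_reattach(notification: dict) -> list:
    """Return (host_device, guest_target) pairs to re-attach, if any."""
    if notification.get("event_type") not in REATTACH_EVENTS:
        return []
    instance = notification.get("payload", {}).get("instance_id")
    return DEVICE_MAP.get(instance, [])

# Example payload shaped like a legacy nova notification.
sample = {
    "event_type": "compute.instance.reboot.end",
    "payload": {"instance_id": "instance-uuid-1"},
}
print(devices_to_reattach(sample))
```

A consumer loop would feed each received notification through `devices_to_reattach` and run the attach command for every pair returned.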
From: smooney@redhat.com <smooney@redhat.com> Date: Tuesday, 23 January 2024 at 10:04 pm To: Sang Tran Quoc <SangTQ8@fpt.com>, openstack-discuss@lists.openstack.org Subject: Re: Mapping local SSD to virtual machine persistently
On Tue, 2024-01-23 at 10:30 +0000, Sang Tran Quoc wrote:
Hello community,
I've got a question related to raw device mapping for OpenStack virtual machines. The situation is that I have several SSDs in my compute node, and I would like to attach these SSDs to my virtual machine to enhance disk performance. My current solution is the command "virsh attach-disk <dom> <host-device> <guest-device>", but it is not persistent and needs a manual re-attach after a hard reboot, which regenerates the VM's XML structure. So I wonder if there is an official solution for my problem. Thank you, and any help from the community is welcome.
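For reference, virsh itself can make a hotplug persistent by also recording it in the saved domain definition, but since nova regenerates that definition (for example on hard reboot), even this does not survive, which is the problem described above. The domain name and device paths are placeholders:

```shell
# "--persistent" affects both the running domain and its saved config;
# nova will still overwrite that config whenever it regenerates the XML.
virsh attach-disk instance-00000001 /dev/nvme0n1 vdb --persistent
```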
This is currently not supported in openstack.
We have discussed adding local storage functionality to nova in the past, but it is non-trivial, since we would need a new REST API to allow attaching/detaching the devices, plus scheduler integration.
Unfortunately that means that today you cannot achieve your goal. If you need to attach additional storage dynamically, your only option is cinder volumes. You can provide additional storage in the flavor, but only statically.
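The two supported options mentioned above look like the following; names and sizes are placeholders:

```shell
# Dynamic: attach a cinder volume to a running server.
openstack volume create --size 100 data-vol
openstack server add volume my-server data-vol

# Static: extra local storage via a flavor ephemeral disk;
# applies only at boot or resize, not at runtime.
openstack flavor create --vcpus 4 --ram 8192 --disk 40 --ephemeral 100 m1.ephemeral
```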
Best regards, Sang Tran