[nova][tc] Removing md5 in password injection
Hi, I didn't feel it would be controversial, though it seems removing md5 password injection is still up for debate: https://review.opendev.org/c/openstack/nova/+/935512 Of course, I'd like the TC to agree with me that injecting md5-hashed passwords is, in 2024, to be considered a security problem that should be fixed (and backported) ASAP. BTW, IMO this patch could be using the new feature from oslo_utils.secretutils that Takashi managed to get in: https://review.opendev.org/c/openstack/oslo.utils/+/931899 https://review.opendev.org/c/openstack/oslo.utils/+/935525 These, IMO, should also be backported to earlier oslo.utils releases, so we can fix earlier OpenStack releases in a nicer way. Cheers, Thomas Goirand (zigo)
On 2024-12-10 14:50:39 +0100 (+0100), Thomas Goirand wrote:
I didn't feel it would be controversial, though it seems removing md5 password injection is still up to debate:
https://review.opendev.org/c/openstack/nova/+/935512
Of course, I'd like the TC to agree with me that injecting md5-hashed passwords is, in 2024, to be considered a security problem that should be fixed (and backported) ASAP.
BTW, IMO this patch could be using the new feature from oslo_utils.secretutils that Takashi managed to get in: https://review.opendev.org/c/openstack/oslo.utils/+/931899 https://review.opendev.org/c/openstack/oslo.utils/+/935525
While I agree, this will need extensive manual testing with a wide variety of guest operating systems. See the PORTABILITY NOTES section of the crypt(3) manpage, but basically POSIX doesn't guarantee support for any particular hashes and options, so just because the host where libcrypt is being called supports a particular combination, that doesn't mean the guest it's injected into will be able to parse it. I also agree with comments on the nova change that this mechanism ought to be at least strongly discouraged for use on any platforms where local agents are able to set passwords from metadata (sounds like it already is?), since it neatly sidesteps the portability problem. Deprecation/removal would be great, but it sounds like Windows doesn't have a functional guest agent capable of this?
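For context, crypt(3)-style hashes announce their scheme in a `$id$` prefix, and a guest can only verify hashes whose prefix its own libcrypt implements; that is the portability trap. A quick sketch for spotting the scheme (the parenthetical support notes are informal summaries, not an authoritative matrix):

```python
# Map crypt(3) "$id$" prefixes to the hashing scheme they denote. Which of
# these a given guest can verify depends entirely on that guest's libcrypt,
# which is exactly the portability problem discussed above.
CRYPT_PREFIXES = {
    "$1$": "md5-crypt (legacy, widely supported, cryptographically weak)",
    "$5$": "sha256-crypt (glibc >= 2.7)",
    "$6$": "sha512-crypt (glibc >= 2.7)",
    "$2b$": "bcrypt (common on *BSD, optional elsewhere)",
    "$y$": "yescrypt (newer glibc/libxcrypt only)",
}


def crypt_scheme(hashed: str) -> str:
    """Return a best-effort description of the scheme used by a hash."""
    for prefix, name in CRYPT_PREFIXES.items():
        if hashed.startswith(prefix):
            return name
    return "traditional DES crypt or unknown scheme"
```

So a host generating a `$6$` (sha512-crypt) password and injecting it into a guest whose libcrypt only knows `$1$` produces an account nobody can log into.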
These, IMO, should also be backported to earlier oslo.utils releases, so we can fix earlier OpenStack releases in a nicer way.
I doubt we'll get consensus on this. As you say, it will explicitly drop support for some older guest platforms, which doesn't seem consistent with our usual policy for bug fixes on stable branches. That said, if it's a long-time deprecated feature anyway, maybe loss of some functionality in it is less risky (just in this specific case)? -- Jeremy Stanley
On 10/12/2024 15:38, Jeremy Stanley wrote:
On 2024-12-10 14:50:39 +0100 (+0100), Thomas Goirand wrote:
I didn't feel it would be controversial, though it seems removing md5 password injection is still up to debate:
https://review.opendev.org/c/openstack/nova/+/935512
Of course, I'd like the TC to agree with me that injecting md5-hashed passwords is, in 2024, to be considered a security problem that should be fixed (and backported) ASAP.
BTW, IMO this patch could be using the new feature from oslo_utils.secretutils that Takashi managed to get in: https://review.opendev.org/c/openstack/oslo.utils/+/931899 https://review.opendev.org/c/openstack/oslo.utils/+/935525
Yes, it could, but that is also not backportable to stable branches, since we can't rely on those oslo.utils releases being available. That is fine on master, but if you wanted to use this on master and also have something backportable, we would have to do an opportunistic import and fall back to the existing code, or have this be two different patches: a backportable change and a master-only change. We can't raise our minimum oslo.utils version on stable to require this without breaking stable policy.
While I agree, this will need extensive manual testing with a wide variety of guest operating systems. See the PORTABILITY NOTES section of the crypt(3) manpage, but basically POSIX doesn't guarantee support for any particular hashes and options, so just because the host where libcrypt is being called supports a particular combination, that doesn't mean the guest it's injected into will be able to parse it.
I also agree with comments on the nova change that this mechanism ought to be at least strongly discouraged for use on any platforms where local agents are able to set passwords from metadata (sounds like it already is?), since it neatly sidesteps the portability problem. Deprecation/removal would be great, but it sounds like Windows doesn't have a functional guest agent capable of this?
The only non-deprecated way to inject an admin password in nova today is the qemu guest agent. The part of the code that is using md5 is part of the deprecated file injection code path for setting the admin password. It may not be explicitly called out in https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/de... but it has been deprecated for removal since Queens.
These, IMO, should also be backported to earlier oslo.utils releases, so we can fix earlier OpenStack releases in a nicer way. I doubt we'll get consensus on this. As you say, it will explicitly drop support for some older guest platforms, which doesn't seem consistent with our usual policy for bug fixes on stable branches. That said, if it's a long-time deprecated feature anyway, maybe loss of some functionality in it is less risky (just in this specific case)?
At the PTG we discussed finally just removing the file-based password injection feature: https://etherpad.opendev.org/p/r.4f297ee4698e02c16c4007f7ee76b7c1#L541 I would prefer to actually do that instead of modifying it to use sha512. With that said, we also accepted that we could use oslo_utils.secretutils to optionally support the crypt package if installed, if there was a strong objection to removing this code. Are you raising such an objection?

I'll also note that file injection is off by default, so for the md5 code to execute you would have to opt into the file injection feature when deploying your cloud, which is considered insecure, and an end user would have had to request it in the API. File injection in this fashion breaks use cases related to confidential computing, as we are actively modifying the content of the root disk, and it does not work with boot-from-volume guests. I strongly suspect it won't work properly with nova backed by ceph either, at least in the case where we are doing the efficient thin cloning of the volumes. So the utility of what little support still exists is questionable.
On 2024-12-10 19:40:37 +0000 (+0000), Sean Mooney wrote: [...]
at the ptg we discussed finally just removing the file base password injection feature
https://etherpad.opendev.org/p/r.4f297ee4698e02c16c4007f7ee76b7c1#L541
i would prefer to actually do that instead of modify it to use sha512.
with that said we also accepted that we could just use oslo_utils.secretutils to optional support crypt package if installed if there was a strong objection to removing this code. are you raising such an objection? [...]
I'm not. Being neither a cloud operator nor Windows user, I'm unable to gauge the degree to which this functionality is useful. Thomas had indicated that it was the only option for setting initial admin passwords on new Windows guests (short of baking them into your images, I guess? or automatically enrolling them in something like Active Directory/LDAP maybe). I don't personally know what people who boot Windows in OpenStack clouds normally do. -- Jeremy Stanley
Hi, Cloud operator with Windows users here: yes, initial passwords for new Windows guests. Cheers, Kees On 10-12-2024 21:07, Jeremy Stanley wrote:
I'm not. Being neither a cloud operator nor Windows user, I'm unable to gauge the degree to which this functionality is useful. Thomas had indicated that it was the only option for setting initial admin passwords on new Windows guests (short of baking them into your images, I guess? or automatically enrolling them in something like Active Directory/LDAP maybe). I don't personally know what people who boot Windows in OpenStack clouds normally do.
On 11/12/2024 06:33, Kees Meijs | Nefos wrote:
Hi,
Cloud operator with Windows users here: yes initial passwords for new Windows guests.
So there are two other ways to set that. You can use cloud-init; Cloudbase Solutions provide cloudbase-init for Windows to do the same thing as cloud-init or glean on Linux: https://cloudbase.it/cloudbase-init/ The other way is to install the qemu guest agent in the Windows image. You're correct that if you can't modify the Windows image to include any agent (runtime (qemu) or first-boot (cloudbase-init)) then file-injection-based password setting is the final option, but it's not the only one.

OpenStack publishes an image guide for how to create images to use with OpenStack: https://docs.openstack.org/image-guide/ As part of that we direct operators to use the cloud tooling produced by Cloudbase: https://docs.openstack.org/image-guide/obtain-images.html#microsoft-windows In the image requirements section we also call out that cloud images are expected to process user-data https://docs.openstack.org/image-guide/openstack-images.html#process-user-da... so that the password, among other things, can be set in a portable way across cloud platforms. While you can use images that don't conform to this guide, it still documents the best practice for operators and users to follow to ensure a good end-to-end experience. It sounds like the Windows images you manage don't follow those recommendations.

The file injection code path should eventually be removed from nova, so long term, keeping it for this use case when it does not work in a bunch of configurations (e.g. booting from a cinder volume) is not a permanent solution.

Even for Windows guests it's generally recommended to use x509 certs instead of passwords for remote management; the admin password is really for a VDI workflow where you would expect someone to connect to the VM via the instance console. This is not the primary workflow that nova was designed for, as the console is really for debugging, but it's a supported and pretty common one if you are using SPICE to have a richer console experience.
Note that we also have a write-once mechanism whereby a first-boot agent can generate an admin password within the guest and post it back to the metadata API to save it to the nova DB. That was specifically added for cloudbase-init if I recall, and it only works if you have an x509 cert or ssh keypair associated with the VM, as that is used to encrypt/decrypt it. Regards, Sean.
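A rough model of the write-once behaviour described above; this is an illustration of the semantics, not nova's actual metadata-API code:

```python
from typing import Optional


class WriteOncePassword:
    """Sketch of write-once semantics for a metadata password slot.

    A first-boot agent may store the (already encrypted) admin password
    exactly once; any later write attempt is rejected, so a compromised
    guest cannot silently replace the recorded value.
    """

    def __init__(self) -> None:
        self._stored: Optional[bytes] = None

    def post(self, encrypted_password: bytes) -> None:
        if self._stored is not None:
            raise PermissionError("password already set (write-once)")
        self._stored = encrypted_password

    def get(self) -> Optional[bytes]:
        # Only ciphertext is ever stored; decrypting it requires the
        # private half of the keypair, which the cloud side never holds.
        return self._stored
```

The write-once constraint is what makes the mechanism trustworthy: the first value posted after boot is the only one the owner will ever read back.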
Hi, Thanks, we'll see if we can migrate and go with cloud-init (we're doing that with Linux already). Cheers, Kees
On 2024-12-11 11:32:36 +0000 (+0000), Sean Mooney wrote: [...]
note that we also have a write once mechanic where by a first boot agent can generate an admin password withing the guest and post that back to the metadta api to save it to the nova db. [...]
Tangential, sorry, but could this sort of mechanism also be used to provide an out-of-band verification of SSH hostkeys generated on first boot for *nix type guests? We used to splat them to the primary terminal and then try to scrape them from the console log, but having a structured API response would be a lot cleaner (or did something similar already get added and I missed it?). -- Jeremy Stanley
On 11/12/2024 14:42, Jeremy Stanley wrote:
On 2024-12-11 11:32:36 +0000 (+0000), Sean Mooney wrote: [...]
note that we also have a write once mechanic where by a first boot agent can generate an admin password withing the guest and post that back to the metadta api to save it to the nova db. [...]
Tangential, sorry, but could this sort of mechanism also be used to provide an out-of-band verification of SSH hostkeys generated on first boot for *nix type guests? We used to splat them to the primary terminal and then try to scrape them from the console log, but having a structured API response would be a lot cleaner (or did something similar already get added and I missed it?).
So this "feature" is more or less undocumented, and I believe it was originally added for AWS EC2 compatibility, before we started to use specs for things like this. It's something we rediscover every few years when we do a code audit. So in that sense the current code is kind of a special snowflake and not directly usable for your use case. It's basically this elif block https://github.com/openstack/nova/blob/f729a7fb133b1cda467ac6be2a05775769bff... which no other metadata path supports. It was added by https://github.com/openstack/nova/commit/a2101c4e7017715af0a29675b89e14ee288... in Grizzly.

With that said, I see the merit in supporting some kind of reporting for limited trusted-boot type use cases, such as providing the ssh host key fingerprint so you can validate that the VM is the one you expect it to be. I'm not sure how I would feel about allowing arbitrary metadata to be stored this way. Is there something beyond the host key that would be useful for you to record on boot?

Perhaps we can try to capture this in a spec or a separate thread and work out what an MVP would look like. There would basically be two parts: one extending the metadata API to support recording the host key fingerprint somewhere, and then making that discoverable via the main API, perhaps in server show. If we allowed read-write access to all instance metadata we would not actually need to modify the main API, as you can already list metadata on an instance, but I'm worried that could be abused, so I don't know if that is a good idea. The nova metadata API is not intended as a high-performance key-value store, so I don't necessarily think it would be a good idea to extend this into a generic CRUD API, but we certainly could provide limited support for targeted use cases like the one you raise.
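For reference, the console-log scraping being compared against usually means pulling the block that cloud-init prints between well-known markers, roughly as sketched below. The marker strings should be verified against the images you actually run, and the failure mode in the sketch (log truncated or markers never printed) is exactly why a structured API would be cleaner:

```python
import re

# Markers cloud-init conventionally prints around the generated host keys
# in the console output; confirm them against your guest images.
BEGIN = "-----BEGIN SSH HOST KEY KEYS-----"
END = "-----END SSH HOST KEY KEYS-----"


def scrape_host_keys(console_log: str) -> list:
    """Best-effort extraction of SSH host public keys from a console log."""
    match = re.search(re.escape(BEGIN) + r"(.*?)" + re.escape(END),
                      console_log, re.DOTALL)
    if match is None:
        return []  # log rotated/truncated, or the markers never appeared
    return [line.strip() for line in match.group(1).splitlines()
            if line.strip()]
```

The brittleness is structural: console logs are ring buffers with no delivery guarantee, so a busy or chatty boot can drop the block entirely, which is what motivates recording the fingerprint through the metadata API instead.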
On 2024-12-11 16:30:40 +0000 (+0000), Sean Mooney wrote: [...]
Tangential, sorry, but could this sort of mechanism also be used to provide an out-of-band verification of SSH hostkeys generated on first boot for *nix type guests? We used to splat them to the primary terminal and then try to scrape them from the console log, but having a structured API response would be a lot cleaner (or did something similar already get added and I missed it?). [...] i see the merrit in supprot some kind of reporting for limited trusted boot type use cases such as providing the ssh host key fingerprint so you can validate that the vm is the one you expect it to be.
I'm not sure how I would feel about allowing arbitrary metadata to be stored this way.
Is there something beyond the host key that would be useful for you to record on boot?
Nothing else comes to mind. Basically it's the desire to avoid ToFU style finger-crossing that the SSH host key you get on initially connecting to a newly booted server instance is really coming from that instance and not a MitM or honeypot. It's possible in most providers to create an authenticated backchannel by abusing the console log, as I described, but that has its own challenges. I suppose there might be alternative management protocols similar to SSH which rely on asymmetric key algorithms to authenticate hosts, but I'm not personally familiar with any (none with the near ubiquity of SSH at any rate).
Perhaps we can try to capture this in a spec or a separate thread and work out what an MVP would look like. There would basically be two parts: one extending the metadata API to support recording the host key fingerprint somewhere, and then making that discoverable via the main API, perhaps in server show.
That would make sense, and it would probably also be a good idea to try to involve e.g. a cloud-init maintainer in any such design discussion, as this would need client-side integration eventually anyway in whatever's going to communicate the key back to nova.
If we allowed read-write access to all instance metadata we would not actually need to modify the main API, as you can already list metadata on an instance, but I'm worried that could be abused, so I don't know if that is a good idea.
Yes, that sounds like a security nightmare.
The nova metadata API is not intended as a high-performance key-value store, so I don't necessarily think it would be a good idea to extend this into a generic CRUD API, but we certainly could provide limited support for targeted use cases like the one you raise.
Agreed, I suppose the field could be given a more generic name in case certain kinds of guests wanted to store functionally similar things in it, but as I said I don't know what else that would end up being anyway. -- Jeremy Stanley
On Thu, Dec 12, 2024 at 3:31 AM Sean Mooney <smooney@redhat.com> wrote:
On 11/12/2024 14:42, Jeremy Stanley wrote:
On 2024-12-11 11:32:36 +0000 (+0000), Sean Mooney wrote: [...]
note that we also have a write once mechanic where by a first boot agent can generate an admin password withing the guest and post that back to the metadta api to save it to the nova db. [...]
Tangential, sorry, but could this sort of mechanism also be used to provide an out-of-band verification of SSH hostkeys generated on first boot for *nix type guests? We used to splat them to the primary terminal and then try to scrape them from the console log, but having a structured API response would be a lot cleaner (or did something similar already get added and I missed it?).
So this "feature" is more or less undocumented, and I believe it was originally added for AWS EC2 compatibility, before we started to use specs for things like this. It's something we rediscover every few years when we do a code audit.
This "feature" was added at the request of mordred (Monty Taylor) IIRC. He was specifically concerned about knowing that the machine he had requested was the one he was actually connecting to. At the time I had reliability concerns, because parsing the console log is actually a little hard to do reliably.

Vendordata is a similar use case, and was added at the request of Adam Young from Red Hat for use with registering new instances with Active Directory or FreeIPA IIRC -- the idea was that you could provide a cryptographic token to the instance in metadata, and then possession of that token at instance registration time would prove that you were in fact the requested instance.

SPIFFE calls this problem space "identity zero" if that helps. AWS solves it these days with a thing called an "instance identity document" which is part of the instance metadata. Possession of that document proves you are a specific instance (it does not, however, stop other software on that instance from also proving that thing). https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-docume...

I wonder if injecting the ssh hostkey via user-data is a meaningful way to solve these problems as well?

I say all of this by way of trying to say:
- parsing the console log for ssh host keys makes me sad because it's not reliable.
- vendordata with a join token is probably a better solution.
- although it would be relatively easy to add an equivalent of the identity document to our metadata service if we wanted to.

(AWS also has an in-guest agent called SSM, but I think that would be a lot more work to implement in Nova, especially with the number of hypervisors it would be expected to support.)

I hope this helps, Michael
On 2024-12-12 10:28:30 +1100 (+1100), Michael Still wrote: [...]
This "feature" was added at the request of mordred (Monty Taylor) IIRC. He was specifically concerned about knowing that the machine that he had requested was the one he was actually connecting to. At the time I had reliability concerns because parsing the console log is actually a little hard to do reliably. [...]
Yes, this dates back to the early days of us collectively (the OpenDev Collaboratory, nee Infrastructure project team, nee CI team) trying to establish a solution to that challenge.
SPIFFE calls this problem space "identity zero" if that helps. AWS solves it these days with a thing called an "instance identity document" which is part of the instance metadata. Possession of that document proves you are a specific instance (it does however not stop other software on that instance from also proving that thing). https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-docume...
How does that work in practice? I guess you'd have to connect to the instance, temporarily accept the dubious hostkey, then read back the identity document from a shell on the instance? It doesn't necessarily get around ToFU, but does at least give a clear signal once you've tentatively accepted the key as to whether you can keep or wipe it and discard the instance/report the incident.
I wonder if injecting the ssh hostkey via user-data is a meaningful way to solve these problems as well? [...]
It could, though at the expense of the private part of the hostkey existing (even temporarily) somewhere other than the server instance itself. If it's generated on the instance and only the public component of the pair is disclosed, then there's less debate as to whether it's possibly been leaked. But for a lot of risk models this is likely fine yes (after all, it's not like the end user can 100% trust the provider's storage backend, staff, whatever, so at some point you have to decide what amount of risk is acceptable and move on). -- Jeremy Stanley
On Thu, Dec 12, 2024 at 10:59 AM Jeremy Stanley <fungi@yuggoth.org> wrote:
On 2024-12-12 10:28:30 +1100 (+1100), Michael Still wrote: [...]
This "feature" was added at the request of mordred (Monty Taylor) IIRC. He was specifically concerned about knowing that the machine that he had requested was the one he was actually connecting to. At the time I had reliability concerns because parsing the console log is actually a little hard to do reliably. [...]
Yes, this dates back to the early days of us collectively (the OpenDev Collaboratory, nee Infrastructure project team, nee CI team) trying to establish a solution to that challenge.
SPIFFE calls this problem space "identity zero" if that helps. AWS solves it these days with a thing called an "instance identity document" which is part of the instance metadata. Possession of that document proves you are a specific instance (it does however not stop other software on that instance from also proving that thing).
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-docume...
How does that work in practice? I guess you'd have to connect to the instance, temporarily accept the dubious hostkey, then read back the identity document from a shell on the instance? It doesn't necessarily get around ToFU, but does at least give a clear signal once you've tentatively accepted the key as to whether you can keep or wipe it and discard the instance/report the incident.
Oh sorry, I should have provided more detail here.

SPIFFE is a specification. The reference implementation, SPIRE, solves this problem by running an agent on the instance. That agent collects the identity document on boot, and then contacts a SPIRE server to turn that into an X.509 certificate that attests to the identity of the instance. That's then used for things like the client side of a certificate exchange for mTLS.

I guess in this case you'd use that X.509 certificate as the "ssh hostkey", although this isn't my field enough to understand if that sentence actually makes sense. That is, SPIFFE / SPIRE are more focused on the mTLS identity use case than the ssh identity use case. You could certainly connect to the instance and verify it had a trusted identity issued by SPIRE?

As an aside, there once was an OpenStack SPIRE attestor (what the agent bit that handles the identity document is called), but it appears to have bitrotted -- https://github.com/zlabjp/spire-openstack-plugin/blob/master/doc/openstack-i.... That page links to https://docs.google.com/document/d/1HkK3Q74yYiqckBMI-h9FrZdlWEkrY5R4uHbXRqSR..., which correctly notes that there is probably something cool with vTPMs that could be done in this space, although I haven't spent a lot of time thinking about it.

Cheers, Michael
On 11/12/2024 23:28, Michael Still wrote:
On Thu, Dec 12, 2024 at 3:31 AM Sean Mooney <smooney@redhat.com> wrote:
On 11/12/2024 14:42, Jeremy Stanley wrote:
On 2024-12-11 11:32:36 +0000 (+0000), Sean Mooney wrote: [...]
note that we also have a write once mechanic where by a first boot agent can generate an admin password withing the guest and post that back to the metadta api to save it to the nova db. [...]
Tangential, sorry, but could this sort of mechanism also be used to provide an out-of-band verification of SSH hostkeys generated on first boot for *nix type guests? We used to splat them to the primary terminal and then try to scrape them from the console log, but having a structured API response would be a lot cleaner (or did something similar already get added and I missed it?).
So this "feature" is more or less undocumented, and I believe it was originally added for AWS EC2 compatibility, before we started to use specs for things like this. It's something we rediscover every few years when we do a code audit.
This "feature" was added at the request of mordred (Monty Taylor) IIRC. He was specifically concerned about knowing that the machine that he had requested was the one he was actually connecting to. At the time I had reliability concerns because parsing the console log is actually a little hard to do reliably.
Vendordata is a similar use case and was added at the request of Adam Young from Red Hat for use with registering new instances with Active Directory or FreeIPA IIRC -- the idea was that you could provide a cryptographic token to the instance in metadata, and then possession of that token at instance registration time would prove that you were in fact the requested instance.
SPIFFE calls this problem space "identity zero" if that helps. AWS solves it these days with a thing called an "instance identity document" which is part of the instance metadata. Possession of that document proves you are a specific instance (it does however not stop other software on that instance from also proving that thing). https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-docume...
I made a proof-of-concept for nova instance identity documents here https://github.com/bbc/nova/commit/382984a3a23032c96089cb5877a55e425db7cee4 inspired by the AWS implementation. Unfortunately I have never had time to take this any further. Jonathan.
On Thu, Dec 12, 2024 at 9:27 PM Jonathan Rosser < jonathan.rosser@rd.bbc.co.uk> wrote: I made a proof-of-concept for nova instance identity documents here
https://github.com/bbc/nova/commit/382984a3a23032c96089cb5877a55e425db7cee4 inspired by the AWS implementation.
Unfortunatley I have never had time to take this any further.
Thanks for the example code -- I think this is now at least two alternative implementations that I've seen, which definitely indicates "market demand" in my mind.

As one of the main perpetrators of Nova's metadata and vendordata implementations, I definitely prefer the "in nova" approach you've taken here. It seems much less fragile than something external to nova. I also think that adding a URL / file to the metadata is a relatively safe operation, especially now that we're no longer pretending to be EC2 in metadata like we did back in the day.

That said, I think your PR also surfaces the hard bit here without providing a strong solution -- in order to sign the identity document, we need a way for every hypervisor to have access to a private key and its password. That's true if we go the JWT route (which I'd have to think more about), or if we simply provide a JSON file with a signature in a separate URL / file (which is how EC2 does this). I am concerned about schemes which place that secret on every hypervisor, because if it leaks we'd have a pretty big problem.

I can think of alternative schemes though:
* we could generate this document and its signature somewhere more central and then ship it around in the database. I am not sure where that central place would be though.
* we could build out a PKI tree in the deployment, with perhaps each hypervisor having an intermediate certificate hanging off the "deployment root certificate".

However, either of those schemes is going to be a lot more code than the actual metadata implementation itself. There are some similar things happening already -- certificates for libvirt TLS and the not-yet-landed certificates for SPICE console connections spring to mind -- but in general Nova assumes those are created during deployment and managed externally.

Sorry to create more problems than I'm solving, but I think my thinking might be iterating down to "this looks useful, but fiddly to implement". Cheers, Michael
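To make the signing question concrete, here is a deliberately simplified sketch of an identity document with a detached signature. It uses an HMAC for brevity, which is precisely the "same secret on every signer and verifier" scheme being worried about above; the PKI-tree alternative would swap in an asymmetric signature so hypervisors sign and anyone holding only the public certificate chain can verify:

```python
import base64
import hashlib
import hmac
import json


def make_identity_document(instance_uuid, project_id, key):
    """Build and HMAC-sign a minimal identity document (sketch only).

    Returns the canonical JSON bytes and a base64 signature. A real
    design would sign with a per-hypervisor private key chained to a
    deployment root CA, avoiding a shared symmetric secret.
    """
    doc = json.dumps({"instance_id": instance_uuid,
                      "project_id": project_id},
                     sort_keys=True).encode()
    sig = hmac.new(key, doc, hashlib.sha256).digest()
    return doc, base64.b64encode(sig).decode()


def verify_identity_document(doc, sig_b64, key):
    """Check a document/signature pair; constant-time comparison."""
    expected = hmac.new(key, doc, hashlib.sha256).digest()
    return hmac.compare_digest(expected, base64.b64decode(sig_b64))
```

Even in this toy form it shows why key distribution dominates the design: anyone who can verify can also forge, which is exactly the property asymmetric signatures remove.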
participants (6)
-
Jeremy Stanley
-
Jonathan Rosser
-
Kees Meijs | Nefos
-
Michael Still
-
Sean Mooney
-
Thomas Goirand