<html><body>
<p><tt><font size="2">Clint Byrum <clint@fewbar.com> wrote on 05/10/2013 07:17:59 PM:<br>
<br>
> From: Clint Byrum <clint@fewbar.com></font></tt><br>
<tt><font size="2">> To: openstack-dev <openstack-dev@lists.openstack.org></font></tt><br>
<tt><font size="2">> Date: 05/10/2013 07:28 PM</font></tt><br>
<tt><font size="2">> Subject: Re: [openstack-dev] [nova][ironic] making file injection <br>
> optional / removing it</font></tt><br>
<tt><font size="2">> <br>
> <br>
> +1 image content never being introspected by OpenStack. They are bytes<br>
> to be fed to a computer... be it kvm, xen, or a baremetal node.</font></tt><br>
<br>
<tt><font size="2">Understanding the filesystems, agreed. However, it's not that outrageous to manipulate the payloads that OSes natively know how to cope with when firmware throws them at them and they land in some sort of ramfs (cpio.gz, tar.gz, .wim). Of Linux, ESXi, and Windows, the only one that ever requires anything to be extracted is Windows; the rest can have content appended without molesting the canned content, and I suspect that is true of other OSes as well. For example, in Linux, even if you only implement cpio.gz and the initrd you get is cpio.lzma, there's no problem: the initramfs unpacker allows mixing and matching. For Windows, you only need to do it for the latest version, which can manipulate all the rest. It's the only delivery scheme I can see that would be durable in the face of arbitrary disk and network drivers baked into images subjected to random hardware. Supporting very few formats gets you a long way: even as OSes churn through NTFS features, btrfs, ext3, ext4, and VMFS (with non-native implementations of varying dubiousness), all of them continue to support the same small set of well-understood archive formats for populating a ramfs that they supported ten years ago.</font></tt><br>
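<tt><font size="2">To make the Linux case concrete: the kernel's initramfs unpacker processes concatenated (even differently compressed) cpio archives in sequence, so injecting a file is just appending a small gzipped "newc"-format cpio overlay to the existing initrd. A minimal sketch in Python; the file names are hypothetical and this is an illustration of the format, not any existing Nova/Ironic code:</font></tt><br>

```python
import gzip


def _newc_entry(name: str, data: bytes, mode: int = 0o100644) -> bytes:
    """Build one cpio 'newc' (magic 070701) entry: a 110-byte ASCII-hex
    header, the NUL-terminated name, then the data, with header+name and
    data each padded to a 4-byte boundary."""
    fields = [
        0,              # c_ino
        mode,           # c_mode
        0, 0,           # c_uid, c_gid
        1,              # c_nlink
        0,              # c_mtime
        len(data),      # c_filesize
        0, 0, 0, 0,     # c_devmajor, c_devminor, c_rdevmajor, c_rdevminor
        len(name) + 1,  # c_namesize (includes trailing NUL)
        0,              # c_check (unused in newc)
    ]
    out = b"070701" + b"".join(b"%08X" % f for f in fields)
    out += name.encode() + b"\0"
    out += b"\0" * (-len(out) % 4)           # pad header+name
    out += data + b"\0" * (-len(data) % 4)   # pad data
    return out


def make_overlay(files: dict) -> bytes:
    """A self-contained gzipped cpio archive holding the injected files."""
    body = b"".join(_newc_entry(n, d) for n, d in files.items())
    body += _newc_entry("TRAILER!!!", b"", mode=0)
    return gzip.compress(body)


def append_to_initrd(initrd_path: str, files: dict) -> None:
    """Append the overlay to an existing initrd. The kernel unpacks both
    archives in order, so the original is never inspected or extracted."""
    with open(initrd_path, "ab") as f:
        f.write(make_overlay(files))
```

<tt><font size="2">The original initrd is opened append-only and never parsed, which is exactly the "append without molesting canned content" property described above.</font></tt><br>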
<br>
<tt><font size="2">> a) MITM attacks from other tenants are in theory possible if they can<br>
> spoof all of the right things.<br>
> b) This requires that instances be able to reach some Nova API node that<br>
> then has access to the rest of the nova infrastructure.<br>
> <br>
> Problem "b" is pretty simple to solve in Nova by having a more clear<br>
> hand-off between the sensitive bits of nova and the metadata service.<br>
> <br>
> Making the partition solution above more useful than HTTP metadata<br>
> services for baremetal would also mean having a secure way to send<br>
> the image to the node to boot, or problem "a" will not be solved. With<br>
> metadata, we have a separation of concerns here by accepting that PXE<br>
> and tftp are not secure and will need to be locked down, but the metadata<br>
> could be secure by simply using HTTPS.<br>
> <br>
> I'm not sure why this would be superior to just implementing HTTPS<br>
> metadata services. Images can have PKI built in for authentication<br>
> of the metadata host. This allows full control of the trust model by<br>
> image authors.<br>
> <br>
> To me, HTTPS metadata is as simple as partitions and filesystems, and<br>
> more flexible.</font></tt><br>
<br>
<tt><font size="2">Well, PXE as it exists today isn't the only option, even for baremetal. There are authenticated, tamper-resistant remote relationships available to leverage even on baremetal (e.g. bridge-filtered conversations with a switch, or KCS or USB communication with BMC devices) that could replace or augment PXE. Using PXE as-is and properly locking it down is possible given the right networking equipment, but it is far easier for implementers to get that wrong than to get it right. Certainly OpenStack must support plain old PXE, but having an available scheme that uses the right feature set in the servers and/or switches to sidestep the whole PXE problem doesn't seem a bad idea (and it provides a convenient kill switch for even trying PXE).</font></tt><br>
<br>
<tt><font size="2">I think it's pretty much a hard requirement to address the initial security before you can begin thinking of HTTPS as secure. After all, everything (URLs, certs, CAs, other credentials, etc.) has to be fed in through some secure channel before HTTPS is of much help against the concerns here. Switching to HTTPS in the middle of a process still leaves it vulnerable to attacks prior to the transition: if the actual content of the entire OS transfer lacks authenticity and integrity assurance, no PKI scheme can restore a secure situation for any subsequent data transfer. Besides, the relationship must be mutually authenticated anyway. Even if the image can trust the infrastructure, if the infrastructure cannot be certain a request came from the intended instance, things are still ripe for hijacking, at least in theory. Either some secret data must be securely injected into the instance, or the instance needs a secure means to authoritatively publish a public key to a trusted element.</font></tt><br>
<br>
<tt><font size="2">In short, I think it's a good idea (perhaps required) to be more ambitious about the security goals than typical PXE allows. It's also not as far out of reach as most believe. </font></tt><br>
<tt><font size="2"><br>
> <br>
> _______________________________________________<br>
> OpenStack-dev mailing list<br>
> OpenStack-dev@lists.openstack.org<br>
> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
> <br>
</font></tt></body></html>