[Openstack-operators] Dealing with ITAR in OpenStack private clouds
Silence Dogood
matt at nycresistor.com
Wed Mar 22 23:37:50 UTC 2017
The thing that screwed us up at Ames back in the day was deleting misplaced
data (should that ever happen). Swift was basically incapable of it at the
time, and Cinder didn't even exist.
Ultimately I ended up heading in the direction of spinning up entirely
separate cloud environments for each security domain. Mind you, we were
running a modified Bexar back then.
Last I saw you could do full zone isolation, even network-layer isolation
(with the right setup; greets to Big Switch), and you were pretty okay then.
I see no reason why you can't stand up ITAR / non-ITAR cloud components and
then build a cleaning system that deprovisions gear ... cleans it
and reprovisions it as needed to meet scaling needs (albeit slowly).
On Wed, Mar 22, 2017 at 7:29 PM, Blair Bethwaite <blair.bethwaite at gmail.com>
wrote:
> Could just avoid Glance snapshots and indeed Nova ephemeral storage
> altogether by exclusively booting from volume with your ITAR volume type or
> AZ. I don't know what other ITAR regulations there might be, but if it's
> just what JM mentioned earlier then doing so would let you have ITAR and
> non-ITAR VMs hosted on the same compute nodes as there would be no local
> HDD storage involved.
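> A rough, untested sketch of what that could look like with the openstack
> CLI (the "itar" volume type, image, network, and server names here are
> just illustrative):
>
>   # create a bootable volume on the ITAR-backed volume type
>   openstack volume create --type itar --image centos7 --size 40 itar-boot
>   # boot from that volume so nothing lands on local ephemeral disk
>   openstack server create --volume itar-boot --flavor m1.medium \
>     --network itar-net itar-vm01
>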
>
> On 23 Mar. 2017 2:28 am, "Jonathan D. Proulx" <jon at csail.mit.edu> wrote:
>
> On Tue, Mar 21, 2017 at 09:03:36PM -0400, Davanum Srinivas wrote:
> :Oops, Hit send before i finished
> :
> :https://info.massopencloud.org/wp-content/uploads/2016/03/Workshop-Resource-Federation-in-a-Multi-Landlord-Cloud.pdf
> :https://git.openstack.org/cgit/openstack/mixmatch
> :
> :Essentially you can do a single cinder proxy that can work with
> :multiple cinder backends (one use case)
>
> The mixmatch stuff is interesting, but it's designed for sharing rather
> than exclusion, is very young, and adds complexity that's likely not
> wanted here. It is a good read though!
>
> For Block Storage you can have 'volume types' with different back ends, and
> you can set quotas per project for each volume type. I've used this
> to deprecate old storage by setting the quota on the 'old' type to zero.
> Presumably you could have an ITAR type that only ITAR projects have quota on
> and a non-ITAR type for other projects, and never the twain shall
> meet.
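> Something like this, roughly (untested, names invented; "itar-backend" has
> to match whatever you call the backend in cinder.conf):
>
>   openstack volume type create --private itar
>   openstack volume type set --property volume_backend_name=itar-backend itar
>   # grant only the ITAR project access to the private type
>   openstack volume type set --project <itar-project-id> itar
>   # belt and braces: zero the ITAR-type quota for everyone else
>   cinder quota-update --volumes 0 --gigabytes 0 --volume-type itar <project-id>
>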
>
> For VMs I use host aggregates and instance type extra specs to separate
> 'special' hardware. Again, instance type access can be per project, so
> having ITAR and non-ITAR aggregates and matching instance types with
> appropriate access lists can likely solve that.
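> A rough sketch (untested, names invented), assuming the scheduler has
> AggregateInstanceExtraSpecsFilter enabled:
>
>   openstack aggregate create --property itar=true itar-hosts
>   openstack aggregate add host itar-hosts compute-07
>   # private flavor that only lands on hosts in the ITAR aggregate
>   openstack flavor create --private --vcpus 4 --ram 8192 --disk 40 \
>     --property aggregate_instance_extra_specs:itar=true itar.large
>   openstack flavor set --project <itar-project-id> itar.large
>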
>
> I've not tried to do anything similar with Image Storage, so I'm not sure
> if there's a way to restrict projects to specific Glance stores. If
> all images were non-ITAR and only provisioned with restricted
> info after launch, *maybe* you could get away with that, though I
> suppose you'd need to disallow snapshots for ITAR projects
> at least... perhaps someone has a better answer here.
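> One blunt option (untested, and the role name is made up) would be to
> restrict snapshotting via Nova policy to a role the ITAR projects simply
> aren't given, e.g. in policy.json:
>
>   "os_compute_api:servers:create_image": "role:snapshotter",
>   "os_compute_api:servers:create_image:allow_volume_backed": "role:snapshotter",
>
> It's coarse (role-based rather than per-project), but it should keep ITAR
> tenants from writing snapshots into a shared Glance store.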
>
> -Jon
>
> :
> :Thanks,
> :Dims
> :
> :On Tue, Mar 21, 2017 at 8:59 PM, Davanum Srinivas <davanum at gmail.com> wrote:
> :> Jonathan,
> :>
> :> The folks from Boston University have done some work around this idea:
> :>
> :> https://github.com/openstack/mixmatch/blob/master/doc/source/architecture.rst
> :>
> :>
> :> On Tue, Mar 21, 2017 at 7:33 PM, Jonathan Mills <jonmills at gmail.com> wrote:
> :>> Friends,
> :>>
> :>> I’m reaching out for assistance from anyone who may have confronted the
> :>> issue of dealing with ITAR data in an OpenStack cloud being used in some
> :>> department of the Federal Gov.
> :>>
> :>> ITAR (https://www.pmddtc.state.gov/regulations_laws/itar.html) is a less
> :>> restrictive level of security than classified data, but it has some thorny
> :>> aspects to it, particularly where media is concerned:
> :>>
> :>> * you cannot co-mingle ITAR and non-ITAR data on the same physical hard
> :>> drives, and any drive, once it has been “tainted” with any ITAR data, is now
> :>> an ITAR drive
> :>>
> :>> * when ITAR data is destroyed, a DBAN is insufficient — instead, you
> :>> physically shred the drive. No need to elaborate on how destructive this
> :>> can get if you accidentally mingle ITAR with non-ITAR
> :>>
> :>> Certainly the multi-tenant model of OpenStack holds great promise in Federal
> :>> agencies for supporting both ITAR and non-ITAR worlds, but great care must
> :>> be taken that *somehow* things like Glance and Cinder don’t get mixed up.
> :>> One must ensure that the ITAR tenants can only access Glance/Cinder in ways
> :>> such that their backend storage is physically separate from any non-ITAR
> :>> tenants. Certainly I understand that Glance/Cinder can support multiple
> :>> storage backend types, such as File & Ceph, and maybe that is an avenue to
> :>> explore for achieving the physical separation. But what if you want to have
> :>> multiple different File backends?
> :>>
> :>> Do the ACLs exist to ensure that non-ITAR tenants can’t access ITAR
> :>> Glance/Cinder backends, and vice versa?
> :>>
> :>> Or…is it simpler to just build two OpenStack clouds….?
> :>>
> :>> Your thoughts will be most appreciated,
> :>>
> :>>
> :>> Jonathan Mills
> :>>
> :>> NASA Goddard Space Flight Center
> :>>
> :>>
> :>
> :>
> :>
> :> --
> :> Davanum Srinivas :: https://twitter.com/dims
> :
> :
> :
> :--
> :Davanum Srinivas :: https://twitter.com/dims
> :