[Openstack-operators] Dealing with ITAR in OpenStack private clouds
Blair Bethwaite
blair.bethwaite at gmail.com
Wed Mar 22 02:23:40 UTC 2017
Dims, might it be overkill to introduce multi-Keystone + federation? (I just
quickly skimmed the PDF, so apologies if I have the wrong end of it.)
Jon, you could just run multiple cinder-volume services and backends. We
do this in the Nectar cloud - each site has cinder AZs matching nova AZs.
Nova can be told not to attach a volume to an instance in a non-matching AZ
(the cross_az_attach option); maybe that's enough for you, but you could
probably take it further with other cinder scheduler filters.
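A rough sketch of the relevant knobs, assuming one cinder-volume service per
AZ (the option names are the standard Nova/Cinder ones; the "itar" AZ name is
just for illustration):

    # nova.conf on the API/compute nodes: refuse attaching volumes across AZs
    [cinder]
    cross_az_attach = False

    # cinder.conf on the cinder-volume host fronting the ITAR-only hardware
    [DEFAULT]
    storage_availability_zone = itar
    # the general-purpose cinder-volume hosts keep the default 'nova' AZ

With that, the Cinder scheduler's AvailabilityZoneFilter keeps volumes created
with --availability-zone itar on the ITAR backend, and Nova refuses
attachments that would cross the boundary.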
On 22 March 2017 at 12:03, Davanum Srinivas <davanum at gmail.com> wrote:
> Oops, hit send before I finished.
>
> https://info.massopencloud.org/wp-content/uploads/2016/03/Workshop-Resource-Federation-in-a-Multi-Landlord-Cloud.pdf
> https://git.openstack.org/cgit/openstack/mixmatch
>
> Essentially you can run a single cinder proxy that works with
> multiple cinder backends (one use case)
>
> Thanks,
> Dims
>
> On Tue, Mar 21, 2017 at 8:59 PM, Davanum Srinivas <davanum at gmail.com> wrote:
> > Jonathan,
> >
> > The folks from Boston University have done some work around this idea:
> >
> > https://github.com/openstack/mixmatch/blob/master/doc/source/architecture.rst
> >
> >
> > On Tue, Mar 21, 2017 at 7:33 PM, Jonathan Mills <jonmills at gmail.com> wrote:
> >> Friends,
> >>
> >> I’m reaching out for assistance from anyone who may have confronted the
> >> issue of dealing with ITAR data in an OpenStack cloud being used in some
> >> department of the Federal Gov.
> >>
> >> ITAR (https://www.pmddtc.state.gov/regulations_laws/itar.html) is a less
> >> restrictive level of security than classified data, but it has some thorny
> >> aspects to it, particularly where media is concerned:
> >>
> >> * you cannot co-mingle ITAR and non-ITAR data on the same physical hard
> >> drives, and any drive, once it has been “tainted” with any ITAR data, is
> >> now an ITAR drive
> >>
> >> * when ITAR data is destroyed, a DBAN is insufficient — instead, you
> >> physically shred the drive. No need to elaborate on how destructive this
> >> can get if you accidentally mingle ITAR with non-ITAR
> >>
> >> Certainly the multi-tenant model of OpenStack holds great promise in
> >> Federal agencies for supporting both ITAR and non-ITAR worlds, but great
> >> care must be taken that *somehow* things like Glance and Cinder don’t get
> >> mixed up.
> >> One must ensure that the ITAR tenants can only access Glance/Cinder in
> >> ways such that their backend storage is physically separate from any
> >> non-ITAR tenants. Certainly I understand that Glance/Cinder can support
> >> multiple storage backend types, such as File & Ceph, and maybe that is an
> >> avenue to explore for achieving the physical separation. But what if you
> >> want to have multiple different File backends?
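On the multiple File backends question: Cinder's multi-backend support covers
that within one deployment. A hedged cinder.conf sketch (the backend names and
shares files are made up for illustration):

    [DEFAULT]
    enabled_backends = nfs_general,nfs_itar

    [nfs_general]
    volume_driver = cinder.volume.drivers.nfs.NfsDriver
    nfs_shares_config = /etc/cinder/nfs_shares_general
    volume_backend_name = GENERAL_NFS

    [nfs_itar]
    volume_driver = cinder.volume.drivers.nfs.NfsDriver
    nfs_shares_config = /etc/cinder/nfs_shares_itar
    volume_backend_name = ITAR_NFS

A volume type per backend (matching on volume_backend_name) is what keeps the
scheduler from ever mixing the two.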
> >>
> >> Do the ACLs exist to ensure that non-ITAR tenants can’t access ITAR
> >> Glance/Cinder backends, and vice versa?
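For Cinder at least, yes: volume types can be made private and granted to
specific projects, so non-ITAR tenants cannot even request the ITAR backend.
Roughly, from memory (the project ID is a placeholder):

    cinder type-create --is-public False itar
    cinder type-key itar set volume_backend_name=ITAR_NFS
    cinder type-access-add --volume-type itar --project-id <ITAR_PROJECT_ID>

Glance is harder: image visibility and membership control who can use an
image, but as far as I know you can't steer individual images to different
stores today, so the image side is where separate deployments may start to
look attractive.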
> >>
> >> Or… is it simpler to just build two OpenStack clouds?
> >>
> >> Your thoughts will be most appreciated,
> >>
> >>
> >> Jonathan Mills
> >>
> >> NASA Goddard Space Flight Center
> >>
> >>
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
--
Cheers,
~Blairo