[ironic][tripleo] RFC: deprecate the iSCSI deploy interface?

Arne Wiebalck arne.wiebalck at cern.ch
Tue Aug 25 06:30:14 UTC 2020


Hi Steve,

On 24.08.20 23:55, Steve Baker wrote:
> 
> On 25/08/20 12:05 am, Dmitry Tantsur wrote:
>>
>>
>> On Mon, Aug 24, 2020 at 1:52 PM Sean Mooney <smooney at redhat.com> wrote:
>>
>>     On Mon, 2020-08-24 at 10:32 +0200, Dmitry Tantsur wrote:
>>     > Hi,
>>     >
>>     > On Mon, Aug 24, 2020 at 10:24 AM Arne Wiebalck
>>     > <arne.wiebalck at cern.ch> wrote:
>>     >
>>     > > Hi!
>>     > >
>>     > > CERN's deployment has been using the iscsi deploy interface
>>     > > since we started with Ironic a couple of years ago (and we
>>     > > have installed around 5000 nodes with it by now). The reason
>>     > > we chose it at the time was simplicity: we did not (and still
>>     > > do not) have a Swift backend to Glance, and the iscsi
>>     > > interface provided a straightforward alternative.
>>     > >
>>     > > While we have not seen obscure bugs/issues with it, I can
>>     > > certainly confirm the scalability issues mentioned by Dmitry:
>>     > > tunneling the images through the controllers can create
>>     > > problems when deploying hundreds of nodes at the same time.
>>     > > The security of the iscsi interface is less of a concern in
>>     > > our specific environment.
>>     > >
>>     > > So, why did we not move to direct (yet)? In addition to the
>>     > > lack of Swift, mostly because iscsi works for us and the
>>     > > scalability issues were not that much of a burning problem
>>     > > ... so we focused on other things :)
>>     > >
>>     > > Here are some thoughts/suggestions for this discussion:
>>     > >
>>     > > How would 'direct' work with other Glance backends (like
>>     > > Ceph/RBD in our case)? If using direct requires duplicating
>>     > > images from Glance to Ironic (or somewhere else) to be
>>     > > served, I think this would be an argument against
>>     > > deprecating iscsi.
>>     > >
>>     >
>>     > With image_download_source=http, ironic will download the
>>     > image to the conductor in order to serve it to the node, which
>>     > is exactly what the iscsi interface does, so not much of a
>>     > change for you (except for s/iSCSI/HTTP/ as a means of serving
>>     > the image).
>>     >
>>     > Would it be an option for you to test direct deploy with
>>     > image_download_source=http?
>>     I think this is likely OK if there is still an option that does
>>     not force deployments to alter any of their other services, but
>>     I think the onus should be on the Ironic and TripleO teams to
>>     ensure there is an upgrade path for those users before this
>>     deprecation becomes a removal, without requiring them to deploy
>>     Swift or a Swift-compatible API, e.g. RadosGW.
>>
>>
>> Swift is NOT a requirement (nor is RadosGW) when 
>> image_download_source=http is used. Any glance backend (or no glance 
>> at all) will work.
> 
> Even though the TripleO undercloud has swift, I'd be inclined to use 
> image_download_source=http so that it can scale out to minions, and so 
> we're not relying on a single-node swift for image serving.

This makes it sound a little like 'direct' with 
image_download_source=http would be easily scalable ... but that is 
only true if you can (and are willing to) scale the Ironic control 
plane through which the images are still tunneled (and Glance behind 
it ... I am not sure if there is any caching of images inside the 
Ironic conductors). This seems to be the case for you and TripleO, but 
it may not be the case in other setups; using conductor groups may 
complicate things, for instance.
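
For reference, this is roughly what the conductor-side configuration
for such a setup looks like, as far as I understand it (a sketch only;
the URL and path below are example values to be adapted):

     [DEFAULT]
     enabled_deploy_interfaces = direct

     [agent]
     # Serve instance images via the conductor's HTTP server instead
     # of relying on Swift temporary URLs.
     image_download_source = http

     [deploy]
     # HTTP server on the conductor from which the agent pulls the
     # image; images are staged under http_root.
     http_url = http://<conductor-host>:8080
     http_root = /httpboot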

So, from what I see, image_download_source=http is a good option to move 
deployments off the iscsi deploy interface, but it does not bring the
same (scalability) advantages you would get from a setup where Glance is
backed by a scalable Swift or RadosGW backend.
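
For anyone who wants to try this, switching a single node over for a
test deployment should be something like (an untested sketch, assuming
'direct' is in enabled_deploy_interfaces on the conductors):

     openstack baremetal node set <node-uuid> --deploy-interface direct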

Cheers,
  Arne


