[openstack-dev] [TripleO][Tuskar] Icehouse Requirements

Robert Collins robertc at robertcollins.net
Fri Dec 13 06:06:33 UTC 2013


On 13 December 2013 06:24, Will Foster <wfoster at redhat.com> wrote:

> I just wanted to add a few thoughts:

Thank you!

> For some comparative information here "from the field": I work
> extensively on deployments of large OpenStack implementations, most
> recently a ~220-node/9-rack deployment (scaling up to 42 racks /
> 1024 nodes soon).  My primary role is of a DevOps/sysadmin nature,
> not a specific development area, so rapid provisioning/tooling/
> automation is an area I work in almost exclusively (mostly
> API-driven, using Foreman/Puppet).  The infrastructure our small team
> designs/builds supports our development and business.
>
> I am the target user base you'd probably want to cater to.

Absolutely!

> I can tell you the philosophy and mechanics of Tuskar/OOO are great,
> something I'd love to start using extensively, but there are some
> aspects of control I feel need to be added (though arguably less for
> me and more for my ilk, who are looking to expand their OpenStack
> footprint).
>
> * ability to 'preview' changes going to the scheduler

What does this give you? How detailed a preview do you need? What
information is critical there? Have you seen the proposed designs for
a heat template preview feature - would that be sufficient?
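
For reference, here's a rough sketch of what I have in mind, assuming
python-heatclient's stack preview call (the endpoint, token handling,
template path and the compute_count parameter below are placeholders,
not something Tuskar defines today):

# Ask Heat to describe the resources it *would* create for a template
# change, without deploying anything or touching the scheduler.
from heatclient.client import Client

heat = Client('1', endpoint='http://undercloud:8004/v1/TENANT_ID',
              token='AUTH_TOKEN')

with open('overcloud.yaml') as f:
    template = f.read()

preview = heat.stacks.preview(
    stack_name='overcloud',
    template=template,
    parameters={'compute_count': 12})  # hypothetical parameter

# The response lists the would-be resources; the exact shape may vary
# by Heat version.
for res in getattr(preview, 'resources', []):
    print(res.get('resource_type'), res.get('resource_name'))

The attraction of doing it that way is that 'what would change' gets
answered at the Heat layer, without us reimplementing any scheduler
logic.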

> * ability to override/change some aspects within node assignment

What would this be used to do? How often do those situations turn up?
What's the impact if you can't do that?

> * ability to view at least minimal logging from within Tuskar UI

Logging of what - the deployment engine? The heat event log? Nova
undercloud logs? Logs from the deployed instances? If it's not there
in V1, but you can get (or already have) credentials for the instances
that hold the logs you want, would that be a big adoption blocker, or
just a nuisance?


> Here's the main reason - most new adopters of OpenStack/IaaS are
> going to be running legacy/mixed hardware, and while they might have
> an initiative to explore and invest, and even a decent budget, most
> of them are not going to have completely identical hardware,
> isolated/flat networks and things set aside in such a way that blind
> auto-discovery/deployment will just work all the time.

That's great information (and something I had, to a degree, expected).
We have a hard dependency on there being no wildcard DHCP servers in
the environment (or we can't deploy). Autodiscovery is something we
don't have yet, but debugging deployment failures is certainly a very
important use case, and one we need to improve both at the plumbing
layer and in the stories around it in the UI.

> There will be a need to sometimes adjust, and those coming from a
> more vertically-scaling infrastructure (most large orgs.) will not
> have 100% matching standards in place for vendor, machine spec and
> network design, which may make Tuskar/OOO seem inflexible and
> 'one-way'.  This may just be a carry-over from, or fear of, the old
> ways of deployment, but nonetheless it is present.

I'm not sure what you mean by matching standards here :). Ironic is
designed to support extremely varied environments with arbitrary mixes
of IPMI/DRAC/iLO/what-have-you, and to abstract that away for us. From
a network perspective I've been arguing the following:

 - we need routable access to the mgmt cards
 - if we don't have that (say there are 5 different mgmt domains with
no routing between them), then we install 5 deployment layers (5
underclouds), which could be as small as one machine each.
 - within the machines that are served by one routable region of mgmt
cards, we need no wildcard DHCP servers, so that our DHCP server can
serve PXE to those machines (for the PXE driver in Ironic).
 - building a single-region overcloud from multiple undercloud regions
will involve manually injecting well-known endpoints (such as the
floating virtual IP for the API endpoints) into some of the regions,
but it's in principle straightforward to do and use with the plumbing
layer today; see the sketch below.
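
To make the last point concrete, a rough sketch of what that manual
injection could look like from a second undercloud region, again via
python-heatclient (the template name and the KeystoneVirtualIP /
GlanceVirtualIP parameters are purely illustrative - real templates
define their own parameter names):

# Create the second region's overcloud pieces, pointing them at
# well-known endpoints that already exist in the first region.
from heatclient.client import Client

heat_r2 = Client('1',
                 endpoint='http://undercloud-region2:8004/v1/TENANT_ID',
                 token='AUTH_TOKEN')

with open('overcloud-region2.yaml') as f:  # illustrative template
    template = f.read()

heat_r2.stacks.create(
    stack_name='overcloud-region2',
    template=template,
    parameters={
        # Hand-injected: the floating virtual IP fronting the API
        # endpoints deployed by the first region.
        'KeystoneVirtualIP': '192.0.2.10',
        'GlanceVirtualIP': '192.0.2.10',
    })

Nothing clever - just feeding the other region's endpoints in as stack
parameters by hand until we grow something nicer.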

> In my case, we're lucky enough to have dedicated, near-identical
> equipment and a flexible network design we architected beforehand,
> which makes Tuskar/OOO a great fit.  Most people will not have this
> greenfield ability and will initially use what they have lying
> around, so as not to make a big investment until familiarity with,
> and trust in, something new has taken hold.
>
> That said, I've been working with Jaromir Coufal on some UI mockups
> of Tuskar with some of this 'advanced' functionality included, and
> from my perspective it looks like something to consider pulling in
> sooner rather than later if you want to maximize the adoption of new
> users.

So, for Tuskar my short-term goals are to support RH in shipping a
polished product while still architecting and building something
sustainable and suitable for integration into the OpenStack ecosystem.
(For instance, one of the requirements for integration is that we
don't [significantly] overlap other projects - and that's why I've
been pushing so hard on the don't-reimplement-the-scheduler aspect of
the discussion.) Medium term, I think we need to look at surfacing all
the capabilities of the plumbing we have today, and long term we need
to look at making it vendor- and local-user-extensible.

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


