[Openstack-operators] Who's using TripleO in production?

Silence Dogood matt at nycresistor.com
Wed Aug 3 15:08:37 UTC 2016


The v1 Helion product was a joke for deployment at scale.  I still don't
know whose harebrained idea it was to use OOO there and then, but it was
harebrained at best.  From my perspective, the biggest issue with Helion
was insane architecture decisions like that one being made with no
adherence to the constraints of reality.

I recall that around early 2015 or so, at an operators' meetup, someone
asked whether anyone was using OOO, and the response was a room full of
laughter.  And yet by that point Helion had already decided to proceed with
it, despite their own people telling them it would take years to make usable.

</rant>

I like the idea of OOO, but it takes time to harden that sort of deployment
scenario.  And trying to build a generic tool to hit hardware in the wild
is an exercise in futility, to a point.  Crowbar actually made some sense
insofar as it was designed to let you write the connector bits you'd need
for your own environment.  I figure that over time OOO will be forced into
that sort of pattern, as every automated deployment framework has been for
the past 20 years or so.  It's amazing how many times I've seen people try
to reinvent this wheel, and how many times they've outright ignored the
lessons of those who went before.
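For what it's worth, the pattern I mean is roughly this (a hypothetical
Python sketch, not actual code from Crowbar or OOO; every name here is
made up for illustration):

```python
# Hypothetical sketch of the "connector bits" pattern: the framework
# owns the generic deployment flow, while site-specific hardware quirks
# live in small connectors the operator writes. All names are illustrative.
from abc import ABC, abstractmethod


class HardwareConnector(ABC):
    """Site-specific glue an operator writes for their own gear."""

    @abstractmethod
    def power_on(self, node_id: str) -> None: ...

    @abstractmethod
    def deploy_image(self, node_id: str, image: str) -> None: ...


class LabIPMIConnector(HardwareConnector):
    """Example connector for one imaginary lab's IPMI-managed boxes."""

    def __init__(self):
        self.log = []

    def power_on(self, node_id):
        # Real code would shell out to ipmitool or similar.
        self.log.append(f"ipmi power on {node_id}")

    def deploy_image(self, node_id, image):
        self.log.append(f"flash {image} onto {node_id}")


def provision(connector: HardwareConnector, nodes, image):
    # The generic flow stays in the framework; the quirks stay
    # behind the connector interface.
    for node in nodes:
        connector.power_on(node)
        connector.deploy_image(node, image)


conn = LabIPMIConnector()
provision(conn, ["node-1", "node-2"], "ubuntu-22.04")
print(len(conn.log))  # prints 4: two steps per node
```

The point is just that the generic tool never pretends to know your
hardware; it gives you a seam to plug your own code into.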

-Matt

On Wed, Aug 3, 2016 at 9:00 AM, Fegan, Joe <joe.fegan at hpe.com> wrote:

> Hi folks,
>
>
>
> I agree. HP(E) were major contributors to TripleO in the early days, and
> our V1 Helion product was based on it. But, as Dan says, we wrote a new
> OpenStack installer from scratch for V2+. Mostly in Ansible. The sources
> are up on GitHub with an Apache2 license - feel free to take and use them.
> We call it HLM (Helion Lifecycle Manager) but you can call it whatever you
> want ;)
>
>
>
> Our production experience and customer feedback with the TripleO-based V1
> were, and are, … “eventful”. It was also hard to debug / restart /
> continue. That was the main motivation for a newer and better
> install/upgrade mechanism. Of course I’m biased lol ;) The group working
> on it in HPE all have ex-public-cloud and/or HPC production backgrounds,
> so we hope we always have the real user perspective in mind.
>
>
>
> Thanks,
>
> Joe.
>
>
>
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>