[openstack-dev] Announcing Tuskar project and PTL nominations
Sylvain Bauza
sylvain.bauza at bull.net
Thu Aug 22 13:20:07 UTC 2013
Hi Martyn,
On 22/08/2013 13:23, Martyn Taylor wrote:
> Hi Sylvain,
>
> We are currently working on design docs. We'll be adding some
> architecture diagrams and description to our documentation soon.
>
Nice to know, thanks.
> To answer your question re: provisioning and images.
>
> In our current implementation (which is very early days), we took the
> images built by TripleO, namely the overcloud non-compute and compute
> images, and we use these directly. In the demo environment you saw in
> the video, we used TripleO CI to set up the machine, register the
> relevant overcloud images with Glance, and so on.
>
> In Tuskar, we lifted a copy of the TripleO overcloud Heat template and
> made some modifications. We split out the non-compute and compute
> sections, which allows us to add multiple entries of each (based on
> what is registered in Tuskar). We then add a section to enforce
> deployment of particular images onto particular bare metal machines
> (this allows us to match hardware to OpenStack services). We do this
> by using the force_hosts capability in the nova baremetal driver:
> https://blueprints.launchpad.net/nova/+spec/baremetal-force-node.
>
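Just so I'm sure I follow the force_hosts part: is the effect roughly
what the rough python-novaclient sketch below does, using the generic
availability-zone trick for forcing a host ? (I realise that is not
exactly the per-node forcing your blueprint link describes; the
endpoint, node, image and flavor names here are all made up, and this
may well not be what your template actually emits.)

    from novaclient.v1_1 import client

    # Credentials and endpoint are placeholders.
    nova = client.Client('admin', 'password', 'admin',
                         'http://undercloud.example:5000/v2.0')

    # Ask nova to place this instance on one specific host by
    # appending the host name to the availability zone (admin only).
    nova.servers.create(
        name='overcloud-compute-0',
        image=nova.images.find(name='overcloud-compute'),
        flavor=nova.flavors.find(name='baremetal-compute'),
        availability_zone='nova:baremetal-node-01')
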
> We also add some extra commands to the Heat template to register
> flavors and associate the flavors, host aggregates and baremetal nodes
> in the overcloud nova control instance. This allows us to tell the
> nova scheduler to match any instance request whose flavor was
> registered with a resource class in Tuskar against hardware that has
> also been added to that resource class.
>
Thanks for the explanation, I'm understanding more. So, basically, *and
I understand this is a POC*, your API lets you dynamically build YAML
templates that inject OpenStack components via os-apply-config and
os-refresh-config, packed into images built by disk-image-builder ?
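And for the flavor / host-aggregate association, this is how I picture
it (again just my own python-novaclient sketch; the metadata key, the
scheduler filter and all the names are my assumptions, not something
taken from Tuskar or your template) :

    from novaclient.v1_1 import client

    # Credentials and endpoint are placeholders.
    nova = client.Client('admin', 'password', 'admin',
                         'http://overcloud.example:5000/v2.0')

    # One host aggregate per Tuskar resource class, holding the bare
    # metal hosts that belong to that class.
    agg = nova.aggregates.create('rclass-compute', None)
    nova.aggregates.add_host(agg, 'baremetal-node-01')
    nova.aggregates.set_metadata(agg, {'tuskar_resource_class': 'compute'})

    # A flavor whose extra spec carries the same key/value, so that
    # AggregateInstanceExtraSpecsFilter (assuming it is enabled in the
    # scheduler filters) only picks hosts from that aggregate.
    flavor = nova.flavors.create('rclass-compute-small', 4096, 2, 40)
    flavor.set_keys({'tuskar_resource_class': 'compute'})

Is that more or less what the extra commands in the Heat template do ?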
> As Tomas mentioned, our initial release is really just a Proof of
> Concept. We'll be working to add more complex features and will
> probably rework many of our "short cuts". Our aim, though, is to
> contribute as much of Tuskar as possible (or as much as makes sense)
> upstream into TripleO or any other component that we utilize and
> extend, and to have Tuskar really concentrate on using existing
> components to manage and deploy OpenStack at large scale.
>
Totally understood. So, my next question is: do you have some kind of
roadmap that could tell us when you expect to release something ready
for lab usage ? =)
Thanks,
-Sylvain
> Regards
> Martyn
>
>
> On 21/08/13 16:15, Sylvain Bauza wrote:
>> Hi Tomas,
>>
>> Are there any design docs which could explain how you provision the
>> baremetal hosts ?
>> As far as I can see, it seems you're relying on the TripleO Heat
>> templates, right ?
>>
>> Are you then using disk-image-builder ?
>>
>> Thanks,
>> -Sylvain
>>
>> PS : I just looked at the Youtube demo
>> https://www.youtube.com/watch?v=VEY035-Lyzo
>>
>>
>> On 21/08/2013 14:32, Tomas Sedovic wrote:
>>> Hi everyone,
>>>
>>> We would like to announce Tuskar, an OpenStack management service.
>>>
>>> Our goal is to provide an API and UI to install and manage OpenStack
>>> at larger scale: where you deal with racks, different hardware
>>> classes for different purposes (storage, memory- vs. CPU-intensive
>>> compute), the burn-in process, monitoring the HW utilisation, etc.
>>>
>>> Some of this will overlap with TripleO, Ceilometer and possibly
>>> other projects. In that case, we will work with the projects to
>>> figure out the best place to fix rather than duplicating effort and
>>> playing in our own sandbox.
>>>
>>>
>>> Current status:
>>>
>>> There's a saying that if you're not embarrassed by your first
>>> release, you've shipped too late.
>>>
>>> I'm happy to say, we are quite embarrassed :-)
>>>
>>> We've got a prototype that allows us to define different hardware
>>> classes and provision the racks with the appropriate images, then
>>> add new racks and have them provisioned.
>>>
>>> We've got a Horizon dashboard plugin that shows the general
>>> direction we want to follow and we're looking into integrating
>>> Ceilometer metrics and alarms.
>>>
>>> However, we're still tossing around different ideas and things are
>>> very likely to change.
>>>
>>> Our repositories are on Stackforge:
>>>
>>> https://github.com/stackforge/tuskar
>>> https://github.com/stackforge/python-tuskarclient
>>> https://github.com/stackforge/tuskar-ui
>>>
>>> And we're using Launchpad to manage our bugs and blueprints:
>>>
>>> https://launchpad.net/tuskar
>>> https://launchpad.net/tuskar-ui
>>>
>>> If you want to talk to us, pop in the #tuskar IRC channel on
>>> Freenode or send an email to openstack-dev at lists.launchpad.net with
>>> "[Tuskar]" in the subject.
>>>
>>>
>>> PTL:
>>>
>>> Talking to OpenStack developers, we were advised to elect the PTL
>>> early.
>>>
>>> Since we're nearing the end of the Havana cycle, we'll elect the PTL
>>> for a slightly longer term -- the rest of Havana and throughout
>>> Icehouse. The next election will coincide with those of the official
>>> OpenStack projects.
>>>
>>> If you are a Tuskar developer and want to nominate yourself, please
>>> send an email to openstack-dev at lists.launchpad.net with subject
>>> "Tuskar PTL candidacy".
>>>
>>> The self-nomination period will end on Monday, 26th August 2013,
>>> 23:59 UTC.
>>>
>>>
>>> --
>>> Tomas Sedovic
>>>
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>