[ironic][tripleo] My PTG & Forum notes

Dmitry Tantsur dtantsur at redhat.com
Wed May 8 14:56:06 UTC 2019


On 5/8/19 11:18 AM, Bogdan Dobrelya wrote:
> On 07.05.2019 19:47, Dmitry Tantsur wrote:
>> Hi folks,
>>
>> I've published my personal notes from the PTG & Forum in Denver: 
>> https://dtantsur.github.io/posts/ironic-denver-2019/
>> They're probably opinionated and definitely not complete, but I still think 
>> they could be useful.
>>
>> Also pasting the whole raw RST text below for ease of commenting.
>>
>> Cheers,
>> Dmitry
>>
>>
>> Keynotes
>> ========
>>
>> The `Metal3`_ project got some spotlight during the keynotes. A (successful!)
>> `live demo`_ was given, demonstrating the use of Ironic through the
>> Kubernetes API to drive the provisioning of bare metal nodes.
> 
> this is very interesting to consider for TripleO integration alongside (or as 
> an alternative to?) standalone Ironic, see my note below
> 
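
For context, the demo drives the Kubernetes-facing side of Ironic: machines are
registered as ``BareMetalHost`` custom resources and the Metal3
baremetal-operator reconciles them by talking to Ironic. A rough, untested
sketch of that registration step in Python could look like the following (the
metal3.io/v1alpha1 CRD is real, but the namespace, MAC, BMC address and
credential values are made-up placeholders):

    # Sketch only: register a machine with Metal3 through the Kubernetes API;
    # the baremetal-operator then reconciles it by driving Ironic.
    # Namespace, MAC, BMC address and credentials are placeholders.
    from kubernetes import client, config

    config.load_kube_config()

    core = client.CoreV1Api()
    core.create_namespaced_secret(
        "metal3",
        client.V1Secret(
            metadata=client.V1ObjectMeta(name="node-0-bmc", namespace="metal3"),
            string_data={"username": "admin", "password": "secret"},
        ),
    )

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="metal3.io", version="v1alpha1",
        namespace="metal3", plural="baremetalhosts",
        body={
            "apiVersion": "metal3.io/v1alpha1",
            "kind": "BareMetalHost",
            "metadata": {"name": "node-0", "namespace": "metal3"},
            "spec": {
                "online": True,
                "bootMACAddress": "52:54:00:12:34:56",
                "bmc": {
                    "address": "ipmi://192.168.111.1:6230",
                    "credentialsName": "node-0-bmc",
                },
            },
        },
    )

The appeal is that the lifecycle becomes declarative: the desired state lives
in the resource, the operator keeps converging Ironic towards it, and deleting
the resource triggers deprovisioning.
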
>>
>> The official `bare metal program`_ was announced to promote managing bare metal
>> infrastructure via OpenStack.
>>
<snip>
>>
>> PTG: TripleO
>> ============
>>
>> We discussed our plans for removing Nova from the TripleO undercloud and
>> moving bare metal provisioning out from under the control of Heat. The
>> plan from the
> 
> I wish we could have Metal3 provisioning via the K8s API adapted for the
> Undercloud in TripleO, probably via a) a standalone kubelet or b) k3s [0].
> The former provides only a kubelet running static pods, with no API server
> etc. The latter is a lightweight k8s distro (a 10MB memory footprint or so)
> and could just as well be used to spawn a very limited kubelet plus API
> server setup for Metal3 to drive the provisioning of overclouds outside of
> Heat and Neutron.

We could use Metal3, but it would definitely change the user experience beyond 
recognition and rule out upgrades. With the current effort we're trying to keep 
the user interactions similar and upgrades still possible.

Dmitry

> 
> [0] 
> https://www.cnrancher.com/blog/2019/2019-02-26-introducing-k3s-the-lightweight-kubernetes-distribution-built-for-the-edge/ 
> 
> 
>> `nova-less-deploy specification`_, as well as the current state
>> of the implementation, was presented.
>>
>> The current concerns are:
>>
>> * upgrades from a Nova-based deployment (probably just wipe the Nova
>>    database),
>> * losing user experience of ``nova list`` (largely compensated by
>>    ``metalsmith list``),
>> * tracking IP addresses for networks other than *ctlplane* (solved the same
>>    way as for deployed servers).
>>
>> The next action item is to create a CI job based on the already merged code and
>> verify a few assumptions made above.
>>
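
To make the ``nova list`` vs ``metalsmith list`` point above a bit more
concrete: once Nova is out of the picture, listing and deploying nodes means
talking to Ironic directly, for example via openstacksdk (metalsmith's
Provisioner is a higher-level wrapper around roughly these calls). A rough,
untested sketch, with the cloud name, node name, image URL and checksum as
made-up placeholders:

    # Sketch only: inspect and deploy bare metal nodes straight through the
    # Ironic API with openstacksdk, i.e. with no Nova involved.
    import openstack

    conn = openstack.connect(cloud="undercloud")

    # Roughly the information ``nova list`` used to show:
    for node in conn.baremetal.nodes(details=True):
        print(node.name, node.provision_state, node.power_state)

    # Heavily simplified direct deployment of one node; metalsmith additionally
    # handles node scheduling, network port wiring, config drives, etc.
    node = conn.baremetal.update_node(
        "compute-0",
        instance_info={
            "image_source": "http://example.com/overcloud-full.qcow2",
            "image_checksum": "<md5-of-the-image>",
        },
    )
    conn.baremetal.set_node_provision_state(node, "active")
    conn.baremetal.wait_for_nodes_provision_state([node], "active")
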
<snip>


