[ironic][ptg] Zed PTG Summary

Iury Gregory iurygregory at gmail.com
Mon Apr 25 14:42:37 UTC 2022


Hello Ironicers!

Sorry for the delay in providing a summary of our PTG =)

First of all, thank you to all contributors who took some time to join our
sessions!
We had a peak of 13 attendees during the last three days.

Day1:
We only had two topics: feedback about the Yoga cycle, and what we should do
about specs.
In the first session we did a retrospective of the Yoga cycle: we discussed
the good and bad things that happened during the cycle and how we can make
things better. In the discussion about specs we came up with some ideas to
improve our process for deciding when a spec should be required.

Day2:
On this day most of the discussions were around topics related to managing
machines and their power usage: how we can decrease the cost of data centers
that need to keep all their machines on even when they are not active or are
taking minimal load. One idea is to provide a way of power tuning nodes that
are already deployed (from the ironic perspective this is a way to
reconfigure a node that is already active); we also talked about turning
down power usage if we are able to identify that IPA is idle.
The custom deploy timeout topic had no strong objections. Since we currently
have a single configuration option that handles the timeout for all steps, we
think we can improve this; we still need to define some details related to
the implementation once we have the RFE.
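For illustration only (the per-step option names below are hypothetical and
do not reflect any final RFE), the idea is to move from one global timeout to
optional per-step overrides in ironic.conf:

    [conductor]
    # Today: a single timeout covering the whole deploy (illustrative value;
    # exact option names may differ per deployment)
    deploy_callback_timeout = 1800

    # Discussed direction (hypothetical): per-step overrides, e.g.
    # deploy_step_timeout.write_image = 3600
    # deploy_step_timeout.configure_raid = 7200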
We also discussed how we can regenerate the inspector.ipxe configuration when
we notice changes in the configuration of the deployment.
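For reference, the generated inspector.ipxe is a small iPXE script along
these lines (addresses and paths below are placeholders); the discussion was
about regenerating it when these embedded values change:

    #!ipxe
    :retry_dhcp
    dhcp || goto retry_dhcp
    kernel http://192.0.2.1:8088/agent.kernel ipa-inspection-callback-url=http://192.0.2.1:5050/v1/continue BOOTIF=${mac} initrd=agent.ramdisk || goto retry_dhcp
    initrd http://192.0.2.1:8088/agent.ramdisk || goto retry_dhcp
    boot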

Day3:
We started this day looking at the survey results, which gave us some ideas
on possible areas we should improve. The ironic safeguards topic focused on
two main things: queuing (limiting the maximum number of concurrent cleaning
operations) and data disk protection (only cleaning the root disk, or
providing a list of disks to skip or to clean). The community decided that
this seems like a good idea and we defined some of the possible paths
forward for how it could be implemented.
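As a rough sketch of what was discussed (the option and property names here
are hypothetical, nothing has been implemented yet), the two safeguards
could surface as a conductor option plus a per-node hint:

    [conductor]
    # Hypothetical: cap the number of cleaning operations running at once
    max_concurrent_clean = 10

    # Hypothetical per-node property listing data disks cleaning should skip
    baremetal node set $NODE_UUID \
        --property skip_block_devices='[{"name": "/dev/sdb"}]'

The "baremetal node set --property" command itself is the existing CLI; only
the property name is illustrative.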
The redfish gateway and related ideas topic brought up an interesting idea:
an ironic driver that can execute a "script" required for some hardware to
start working properly. There is a recording from the first meeting that can
provide more details =).
The per-node clean steps topic had no objections; we think this will improve
the operator experience in scenarios where specific steps need to run on a
node that has a different disk configuration.
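For context, manual cleaning already accepts an explicit list of steps; the
proposal is to let operators attach such steps to an individual node so that
automated cleaning can pick them up. The existing manual flow looks like
this (the node UUID is a placeholder):

    baremetal node clean $NODE_UUID --clean-steps \
        '[{"interface": "deploy", "step": "erase_devices_metadata"}]'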


Day4:
Most of the topics on this day were focused on networking. We discussed the
status of OVN DHCP support and how we will move forward on our side to make
things work once Neutron and OVN have all the necessary bits in place.
In the next topic we talked about adding device configuration capabilities
to networking-baremetal: in multi-tenant BMaaS there is a need to configure
the ToR network devices (access/edge switches), and many vendors have
abandoned the ML2 mechanism plug-ins that supported this, so we are now
looking to add support for new mechanisms with more features that could
improve the operator experience. We focused on discussing the pros and cons
of NETCONF and YANG, but also other alternative solutions.
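To make the NETCONF/YANG option concrete, here is a minimal sketch (not part
of networking-baremetal; the host, credentials and payload are placeholders)
of pushing a switch-port description with the ncclient Python library,
expressed against the standard ietf-interfaces YANG model:

    from ncclient import manager

    # XML config fragment targeting the ietf-interfaces YANG model;
    # the interface name and description are placeholders.
    CONFIG = """
    <config>
      <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
        <interface>
          <name>GigabitEthernet0/0/1</name>
          <description>tenant-network-101</description>
        </interface>
      </interfaces>
    </config>
    """

    # Connect to the ToR switch over NETCONF (port 830) and push the change.
    with manager.connect(host="tor1.example.com", port=830,
                         username="admin", password="secret",
                         hostkey_verify=False) as conn:
        conn.edit_config(target="running", config=CONFIG)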
The other topics we discussed were:
- netboot deprecation: we discussed how we should move forward with some of
our testing for partition images + UEFI.
- Bluefield DPU: not much discussion, since we didn't have many folks
interested in the topic.
- Anaconda driver: we talked about how we can get CI for testing the
driver.


You can find more information about the topics and the discussions in our
etherpad: https://etherpad.opendev.org/p/ironic-zed-ptg

-- 


Att[]'s
Iury Gregory Melo Ferreira
MSc in Computer Science at UFCG
Part of the ironic-core and puppet-manager-core team in OpenStack
Software Engineer at Red Hat Czech
Social: https://www.linkedin.com/in/iurygregory
E-mail: iurygregory at gmail.com