[openstack-dev] [Ironic] Changes to Ramdisk and iPXE defaults in Devstack and many gate jobs

Jay Faulkner jay at jvf.cc
Thu May 12 15:54:29 UTC 2016


Hi all,


A change to the Ironic devstack plugin (https://review.openstack.org/#/c/313035/) is in the gate. It changes the default ironic-python-agent (IPA) ramdisk from CoreOS to TinyIPA, and enables iPXE by default.
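
For anyone whose devstack environment depends on the old behavior, the relevant knobs can be pinned in local.conf. A minimal sketch, assuming the variable names used by the ironic devstack plugin (double-check against the review above):

    [[local|localrc]]
    # Pin the previous defaults; the patch above only changes defaults,
    # it does not remove the ability to select CoreOS or plain PXE.
    IRONIC_RAMDISK_TYPE=coreos
    IRONIC_IPXE_ENABLED=False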


As part of the work to improve and speed up gate jobs, we determined that iPXE speeds up deployments and makes them more reliable by transferring ramdisks over HTTP instead of TFTP. Additionally, the TinyIPA image, in development over the last few months, uses less RAM and is smaller, allowing faster transfers and more simultaneous VMs to run in the gate.
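
For context, iPXE lets the NIC chainload a small boot script and then pull the kernel and ramdisk over HTTP. Roughly like the following; the file names and address below are placeholders for illustration, not Ironic's actual boot template:

    #!ipxe
    dhcp
    # Kernel and ramdisk are fetched over HTTP -- faster and more
    # reliable than TFTP for multi-megabyte ramdisks.
    kernel http://192.0.2.1:8088/deploy_kernel
    initrd http://192.0.2.1:8088/deploy_ramdisk
    boot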


In addition to changing the devstack default, a patch is up (https://review.openstack.org/#/c/313800/) to switch most Ironic jobs to iPXE and TinyIPA. This change gives IPA voting check jobs and tarball-publishing jobs for both supported ramdisks (CoreOS and TinyIPA). Ironic (and any project other than IPA) will use the publicly published TinyIPA image.
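
Once the publishing jobs land, consuming the image should just be a matter of downloading the published kernel/ramdisk pair. Something like the following; the exact paths and file names depend on the job definitions in the review, so treat these as hypothetical:

    wget http://tarballs.openstack.org/ironic-python-agent/tinyipa/files/tinyipa.gz
    wget http://tarballs.openstack.org/ironic-python-agent/tinyipa/files/tinyipa.vmlinuz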


In summary:

- Devstack changes (merging now):

  - Defaults to TinyIPA ramdisk

  - Defaults to iPXE enabled

- Gate changes (needs review at: https://review.openstack.org/#/c/313800/ )

  - Ironic-Python-Agent

  - Voting CoreOS and TinyIPA source jobs (ramdisks built on the fly)

  - Ironic

    - Change all jobs (except the bash-ramdisk pxe_ssh job) to TinyIPA

    - Change all jobs but one to use iPXE

    - Change all gate jobs to use 512 MB of RAM for test VMs (see the sketch after this list)
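
On that last point, TinyIPA's smaller footprint is what makes 512 MB test VMs viable at all. The devstack setting involved, if I have the name right, is:

    [[local|localrc]]
    # RAM (in MB) for the virtual baremetal nodes devstack creates.
    # 512 assumes the TinyIPA ramdisk; CoreOS needs considerably more.
    IRONIC_VM_SPECS_RAM=512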


If there are any questions or concerns, feel free to ask here or in #openstack-ironic.


P.S. I welcome users of the DIB ramdisk to help create a job that runs it against IPA. All supported ramdisks should be tested in IPA's gate to avoid breakage, as IPA is inherently dependent on its environment.



Thanks,

Jay Faulkner (JayF)

OSIC