[openstack-dev] [ironic workflow question]

Dmitry Tantsur dtantsur at redhat.com
Wed Jun 4 13:27:06 UTC 2014


On Wed, 2014-06-04 at 21:18 +0800, 严超 wrote:
> Thank you !
> 
> I noticed the two sets of k+r in the TFTP configuration of Ironic.
> 
> Should the two sets be the same k+r?
Deploy images are created for you by DevStack (or whatever tool sets up
your environment). If you build them by hand, you can use
diskimage-builder. Currently their IDs are stored in the flavor
metadata; later they will be stored in the node metadata instead.
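For the hand-built route, the commands look roughly like this (the
"baremetal" flavor name and the UUID placeholders are illustrative; the
flavor-key names are the ones the PXE driver reads at the time of
writing):

```shell
# build a deploy kernel + ramdisk with diskimage-builder's
# deploy-ironic element (output: deploy-ramdisk.kernel and
# deploy-ramdisk.initramfs)
ramdisk-image-create ubuntu deploy-ironic -o deploy-ramdisk

# upload both to Glance, noting the UUIDs it returns
glance image-create --name deploy-kernel --is-public True \
    --disk-format aki --container-format aki < deploy-ramdisk.kernel
glance image-create --name deploy-ramdisk --is-public True \
    --disk-format ari --container-format ari < deploy-ramdisk.initramfs

# record the deploy images in the flavor metadata (placeholder UUIDs)
nova flavor-key baremetal set \
    baremetal:deploy_kernel_id=<kernel-uuid> \
    baremetal:deploy_ramdisk_id=<ramdisk-uuid>
```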

And then you have "production" images: whatever you actually want to
deploy. Their kernel and ramdisk are referenced in the Glance metadata
of the instance image.

The TFTP configuration is created for you automatically; I doubt you
should ever need to change it by hand.
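For the curious, the generated config is a standard pxelinux one with
two labels, roughly like the sketch below (paths and parameter names
are illustrative, not copied from the actual template): "deploy" boots
the deploy k+r with the iSCSI/callback parameters on the kernel command
line, and "boot" boots the final k+r after deployment.

```
default deploy

label deploy
kernel deploy_kernel
append initrd=deploy_ramdisk iscsi_target_iqn=<iqn> \
    deployment_id=<node-uuid> ironic_api_url=<api-url> troubleshoot=0

label boot
kernel kernel
append initrd=ramdisk root=<root-device> ro
```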

> 
> The first set is defined in the ironic node definition. 
> 
> How do we define the second set correctly ? 
> 
> Best Regards!
> Chao Yan
> --------------
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> --------------
> 
> 
> 
> 2014-06-04 21:00 GMT+08:00 Dmitry Tantsur <dtantsur at redhat.com>:
>         On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
>         > Hi,
>         >
>         > Thank you very much for your reply !
>         >
>         > But I still have some questions. I've now reached the step
>         > where Ironic partitions the disk, as you described.
>         >
>         > Then, how does Ironic copy the image? I know the image
>         > comes from Glance. But how do we know the image is really
>         > available when the node reboots?
>         
>         I don't quite understand your question; what do you mean by
>         "available"? Anyway, before deploying, Ironic downloads the
>         image from Glance, caches it, and simply copies it to the
>         mounted iSCSI partition (using dd or similar).
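To make the "dd or so" step concrete, here is a tiny stand-in that
copies a cached "image" onto a fake disk; in a real deploy the target
is the iSCSI-attached block device, not a file, and all paths here are
made up:

```shell
# simulate Ironic's cache-then-copy: a plain file plays the role of
# the iSCSI-attached disk
mkdir -p /tmp/ironic-demo/cache
printf 'pretend-glance-image' > /tmp/ironic-demo/cache/instance-image

# copy the cached image onto the "disk" (a real deploy would target
# something like a /dev/disk/by-path/ iSCSI device)
dd if=/tmp/ironic-demo/cache/instance-image \
   of=/tmp/ironic-demo/fake-disk bs=1M 2>/dev/null

# verify the copy is bit-for-bit identical
cmp /tmp/ironic-demo/cache/instance-image /tmp/ironic-demo/fake-disk \
    && echo "image copied intact"
```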
>         
>         >
>         > And what is the difference between the final kernel/ramdisk
>         > and the original kernel/ramdisk?
>         
>         We have two sets of kernel+ramdisk:
>         1. Deploy k+r: used only for the deploy process itself, to
>         provide the iSCSI volume and call back to Ironic. There is an
>         ongoing effort to create a smarter ramdisk, called Ironic
>         Python Agent, but it's still WIP.
>         2. Your own k+r, as stated in the Glance metadata for the
>         image: these will be used for booting after deployment.
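Concretely, attaching the second set to an image (assuming the
conventional Glance property names kernel_id and ramdisk_id, with
placeholder UUIDs) looks like:

```shell
# upload your production kernel/ramdisk to Glance first, then link
# them to the instance image through its properties
glance image-update my-image \
    --property kernel_id=<kernel-uuid> \
    --property ramdisk_id=<ramdisk-uuid>
```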
>         
>         >
>         >
>         >
>         >
>         > 2014-06-04 19:36 GMT+08:00 Dmitry Tantsur
>         <dtantsur at redhat.com>:
>         >         Hi!
>         >
>         >         The workflow is not entirely documented yet, AFAIK.
>         >         After PXE boots the deploy kernel and ramdisk, the
>         >         ramdisk exposes the hard drive via iSCSI and
>         >         notifies Ironic. Ironic then partitions the disk,
>         >         copies the image, and reboots the node with the
>         >         final kernel and ramdisk.
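A rough sketch of what the deploy ramdisk does at that point
(illustrative only; the real deploy-ironic scripts differ in detail,
and the exact callback endpoint is omitted):

```shell
# the parameters were written into the PXE config by Ironic and
# arrive on the kernel command line
iqn=$(sed 's/.*iscsi_target_iqn=\([^ ]*\).*/\1/' /proc/cmdline)
api=$(sed 's/.*ironic_api_url=\([^ ]*\).*/\1/' /proc/cmdline)

# expose the local disk as an iSCSI target for Ironic to write to
tgtadm --lld iscsi --mode target --op new --tid 1 --targetname "$iqn"
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 -b /dev/sda
tgtadm --lld iscsi --mode target --op bind --tid 1 -I ALL

# ...then call back to the Ironic API so it can partition the disk,
# dd the image over, and reboot the node
curl -X POST "$api"   # callback path/body omitted
```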
>         >
>         >         On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
>         >         > Hi, All:
>         >         >
>         >         > I searched a lot for how Ironic automatically
>         >         > installs an image on bare metal, but there seems
>         >         > to be no clear workflow documented out there.
>         >         >
>         >         > What I know is: in traditional PXE, a bare-metal
>         >         > node pulls an image from the PXE server using
>         >         > TFTP. In the TFTP root there is a ks.conf that
>         >         > tells the installer which image to kickstart.
>         >         >
>         >         > But in Ironic there is no ks.conf in the TFTP
>         >         > root. How does the bare-metal node know which
>         >         > image to install? Is there any clear workflow I
>         >         > can read?
>         >         >
>         >         >
>         >         >
>         >         >
>         >         >
>         >
>         >         > _______________________________________________
>         >         > OpenStack-dev mailing list
>         >         > OpenStack-dev at lists.openstack.org
>         >         >
>         >
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>         >
>         >
>         >
>         >
>         >
>         
>         
>         
>         
> 
> 




