<div dir="ltr">Hi Devananda,<div><br></div><div>Most of this should work fine. The only problematic part is handling servers that are booting for the first time and have never been connected to Ironic. Neutron doesn't have control over the default network that all un-provisioned switch ports should be a member of. Even if we added support for this, the management network you would likely want them on is normally a network not known to Neutron. </div>
<div><br></div><div>For that workflow to work, I think the switch ports should be manually configured to be in the management VLAN by default. The servers will then be able to boot up, receive their PXE image from Ironic, etc. Once they boot, Ironic will create an external attachment point using the information learned from LLDP. It's then up to the backend implementation to ensure that when an external attachment point isn't associated with a specific Neutron network, the switch port stays in the default network it was configured in to begin with. </div>
<div><br></div><div>The workflow would then be:</div><div>1. Admin puts all switch ports that might have Ironic servers plugged into them into the management network.</div><div>2. A new Ironic server is plugged in, successfully boots onto the management network, and learns its switch ID/port from LLDP.</div>
<div>3. The Ironic management server makes a call to Neutron to create an external attachment point using the switch ID/port received from the new server.</div><div>4. When the server is being assigned to a tenant, Ironic passes the external attachment ID to Nova, which adds it to the neutron port creation request.</div>
<div>5. Neutron will then assign the external attachment point to the network in the port creation request, at which point the backend will be triggered to configure the switch-port for appropriate VLAN access, etc.</div>
<div>6. When the server is terminated, Ironic will remove the network ID from the external attachment point, which will instruct the Neutron backend to return the port to the default VLAN it was in before. In this case that is the management VLAN, so the port is back on the appropriate network for provisioning again.</div>
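The lifecycle above can be sketched roughly like this. All class and attribute names here are purely illustrative (the extension API in [1] is still a proposal under review), but the state transitions match steps 1-6:

```python
class ExternalAttachmentPoint:
    """Illustrative model: maps a physical switch port to an optional
    Neutron network, falling back to a default (management) network."""

    def __init__(self, switch_id, port_id, default_network="management"):
        self.switch_id = switch_id        # e.g. switch chassis MAC learned via LLDP
        self.port_id = port_id            # e.g. "Ethernet1/12"
        self.default_network = default_network
        self.network = None               # None => backend leaves port on default VLAN

    @property
    def effective_network(self):
        # The backend keeps the switch port on the management network
        # whenever no tenant network is assigned.
        return self.network if self.network is not None else self.default_network

    def assign(self, network_id):
        # Step 5: a Neutron port creation references this attachment point,
        # triggering the backend to reconfigure the physical switch port.
        self.network = network_id

    def release(self):
        # Step 6: Ironic removes the network ID on instance termination;
        # the port falls back to the management VLAN for re-provisioning.
        self.network = None


# Steps 2-3: Ironic learns the switch ID/port via LLDP and registers it.
eap = ExternalAttachmentPoint(switch_id="00:1b:21:aa:bb:cc",
                              port_id="Ethernet1/12")
assert eap.effective_network == "management"

# Steps 4-5: the server is assigned to a tenant network.
eap.assign("tenant-net-42")
assert eap.effective_network == "tenant-net-42"

# Step 6: the server is terminated; the port returns to management.
eap.release()
assert eap.effective_network == "management"
```

The key design point is that "no network assigned" is a meaningful state, not an error: it is what keeps the port provisionable.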
<div><br></div><div>Does that make sense?</div><div><br></div><div>Thanks,</div><div>Kevin Benton</div><div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, May 20, 2014 at 9:48 AM, Devananda van der Veen <span dir="ltr"><<a href="mailto:devananda.vdv@gmail.com" target="_blank">devananda.vdv@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Kevin!<div><br></div><div>I had a few conversations with folks at the summit regarding this. Broadly speaking, yes -- this integration would be very helpful for both discovery and network/tenant isolation at the bare metal layer.</div>
<div><br></div><div>I've left a few comments inline....<br>
<div class="gmail_extra"><br><br><div class="gmail_quote"><div class="">On Mon, May 19, 2014 at 3:52 PM, Kevin Benton <span dir="ltr"><<a href="mailto:blak111@gmail.com" target="_blank">blak111@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Hello,<div><br></div><div>I am working on an extension for neutron to allow external attachment point information to be stored and used by backend plugins/drivers to place switch ports into neutron networks[1]. </div>
<div><br></div><div>One of the primary use cases is to integrate ironic with neutron. The basic workflow is that ironic will create the external attachment points when servers are initially installed. </div></div></blockquote>
<div><br></div></div><div>This should also account for servers that are already racked, which Ironic is instructed to manage. These servers would be booted into a discovery state, e.g. running ironic-python-agent, and hardware information (inventory, LLDP data, etc.) could be sent back to Ironic.</div>
<div><br></div><div>To do this, nodes not yet registered with Ironic will need to be PXE booted on a common management LAN (either untagged VLAN or a specific management VLAN), which can route HTTP(S) and TFTP traffic to an instance of ironic-api and ironic-conductor services. How will the routing be done by Neutron for unknown ports?</div>
<div class="">
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div>This step could either be automated (extract the switch ID and port number from the LLDP message) or it could be manually performed by an admin who notes the ports a server is plugged into. </div></div></blockquote>
<div><br></div></div><div>Ironic could extract info from LLDP if the machine has booted into the ironic-python-agent ramdisk and is able to communicate with Ironic services. So it needs to be networked /before/ it's enrolled with Ironic. If that's possible -- great. I believe this is the workflow that the IPA team intends to follow.</div>
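For reference, pulling the switch ID and port out of a received LLDP frame is fairly mechanical. A minimal sketch of the TLV parsing (per IEEE 802.1AB: a 16-bit header carries a 7-bit type and 9-bit length), assuming the agent already has the raw LLDPDU payload; a real agent would likely rely on lldpd or similar rather than hand-rolling this:

```python
import struct

def parse_lldp_tlvs(payload):
    """Parse LLDP TLVs: each has a 16-bit header (7-bit type, 9-bit
    length) followed by the value bytes. Returns {type: value}."""
    tlvs = {}
    i = 0
    while i + 2 <= len(payload):
        header = struct.unpack_from("!H", payload, i)[0]
        tlv_type = header >> 9
        tlv_len = header & 0x1FF
        if tlv_type == 0:            # End of LLDPDU
            break
        tlvs[tlv_type] = payload[i + 2:i + 2 + tlv_len]
        i += 2 + tlv_len
    return tlvs

def switch_and_port(tlvs):
    # Type 1 = Chassis ID, type 2 = Port ID; the first value byte is a
    # subtype (4 = MAC address for chassis, 5 = interface name for port).
    chassis, port = tlvs[1], tlvs[2]
    if chassis[0] == 4:
        switch_id = ":".join("%02x" % b for b in bytearray(chassis[1:]))
    else:
        switch_id = chassis[1:].decode()
    if port[0] in (5, 7):
        port_id = port[1:].decode()
    else:
        port_id = ":".join("%02x" % b for b in bytearray(port[1:]))
    return switch_id, port_id

# A sample LLDPDU: Chassis ID (MAC subtype) + Port ID (ifname) + End TLV.
frame = (b"\x02\x07\x04\x00\x1b\x21\xaa\xbb\xcc"
         b"\x04\x0d\x05Ethernet1/12"
         b"\x00\x00")
switch_id, port_id = switch_and_port(parse_lldp_tlvs(frame))
```

The (switch_id, port_id) pair is exactly what Ironic would hand to Neutron when creating the external attachment point.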
<div><br></div><div>Setting it manually should also, of course, be possible, but less manageable with large numbers of servers.</div><div class=""><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div><br></div><div>Then when an instance is chosen for assignment and the neutron port needs to be created, the creation request would reference the corresponding attachment ID and neutron would configure the physical switch port to place the port on the appropriate neutron network.</div>
</div></blockquote><div><br></div></div><div>Implementation question here -- today, Nova does the network attachment for instances (or at least, Nova initiates the calls out to Neutron). Ironic can expose this information to Nova and allow Nova to coordinate with Neutron, or Ironic can simply call out to Neutron, as it does today when setting the DHCP extra options. I'm not sure which approach is better.</div>
<div> </div><div><br></div><div>Cheers,</div><div>Devananda</div></div></div></div></div>
<br>_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div>Kevin Benton</div>
</div>