[openstack-dev] [Ironic][Neutron] - Integration with neutron using external attachment point

Kevin Benton blak111 at gmail.com
Wed May 21 00:26:26 UTC 2014


Hi Russell,

Thanks for sharing this. I introduced this as an extension so that it
can hopefully be used by ML2 and any other plugin that includes the
mixin.

I have a couple of questions about the workflow you described:

>1. Nova calls Neutron to create a virtual "port". Because this happens
_before_ Nova touches the virt driver, the port is at this point identical
to one created for a virtual server.
>6. Ironic updates each of the customer's Neutron ports with information
about its physical switch port.

To reduce API calls, did you look at whether Neutron port creation could
be deferred until the information from Ironic is available? Or does port
creation happen long before Ironic is called?

>5. Ironic deletes a Neutron port that was created at bootstrap time to
trunk the physical switch ports for provisioning.

What is the process for creating this port in the first place? Is the
management network used to provision Ironic instances known to Neutron?

Thanks,
Kevin Benton



On Tue, May 20, 2014 at 3:01 PM, Russell Haering
<russellhaering at gmail.com> wrote:

> We've been experimenting some with how to use Neutron with Ironic here at
> Rackspace.
>
> Our very experimental code:
> https://github.com/rackerlabs/ironic-neutron-plugin
>
> Our objective is the same as what you're describing, to allow Nova servers
> backed by Ironic to attach to arbitrary Neutron networks. We're initially
> targeting VLAN-based networks only, but eventually want to do VXLAN from
> the top-of-rack switches, controlled via an SDN controller.
>
> Our approach is a little different from what you're describing, though.
> Our objective is to modify the existing Nova -> Neutron interaction as
> little as possible, which means approaching the problem by asking "how
> would an L2 agent do this?".
>
> The workflow looks something like:
>
> 1. Nova calls Neutron to create a virtual "port". Because this happens
> _before_ Nova touches the virt driver, the port is at this point identical
> to one created for a virtual server.
> 2. Nova executes the "spawn" method of the Ironic virt driver, which makes
> some calls to Ironic.
> 3. Inside Ironic, we know about the physical switch ports that the
> selected Node is connected to. This information is discovered early on
> using LLDP and stored in the Ironic database.
> 4. We actually need the node to remain on an internal provisioning VLAN
> for most of the provisioning process, but once we're done with on-host work
> we turn the server off.
> 5. Ironic deletes a Neutron port that was created at bootstrap time to
> trunk the physical switch ports for provisioning.
> 6. Ironic updates each of the customer's Neutron ports with information
> about its physical switch port.
> 7. Our Neutron extension configures the switches accordingly.
> 8. Then Ironic brings the server back up.
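>
> To make step 6 concrete, here's a minimal sketch (not our actual plugin
> API) of how Ironic might push the physical switch mapping into a
> Neutron port with python-neutronclient; the binding:profile keys are
> assumptions for illustration:
>
>     from neutronclient.v2_0 import client
>
>     # Hypothetical sketch: record the discovered top-of-rack switch and
>     # port on the Neutron port so the plugin can configure the switch.
>     # The profile keys below are illustrative, not a real schema.
>     neutron = client.Client(username='ironic', password='secret',
>                             tenant_name='service',
>                             auth_url='http://keystone:5000/v2.0')
>
>     def attach_physical_port(port_id, switch_id, switch_port):
>         """Record the physical switch mapping on a Neutron port."""
>         body = {'port': {'binding:profile': {
>             'switch_id': switch_id,    # e.g. the switch chassis MAC
>             'port_id': switch_port,    # e.g. 'Ethernet1/5'
>         }}}
>         neutron.update_port(port_id, body)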
>
> The destroy process basically does the reverse. Ironic removes the
> physical switch mapping from the Neutron ports, re-creates an internal
> trunked port, does some work to tear down the server, then passes control
> back to Nova. At that point Nova can do what it wants with the Neutron
> ports. Hypothetically that could include allocating them to a different
> Ironic Node, etc., although in practice it just deletes them.
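>
> Roughly, the teardown side of the same sketch (same assumed profile
> keys, reusing the client from above):
>
>     def detach_physical_port(port_id):
>         """Clear the physical switch mapping before teardown."""
>         neutron.update_port(port_id, {'port': {'binding:profile': {}}})
>
>     def recreate_provisioning_port(network_id, switch_id, switch_port):
>         """Re-create the internal trunked port used for provisioning."""
>         return neutron.create_port({'port': {
>             'network_id': network_id,
>             'binding:profile': {'switch_id': switch_id,
>                                 'port_id': switch_port}}})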
>
> Again, this is all very experimental in nature, but it seems to work
> fairly well for the use-cases we've considered. We'd love to find a way to
> collaborate with others working on similar problems.
>
> Thanks,
> Russell
>
>
> On Tue, May 20, 2014 at 7:17 AM, Akihiro Motoki <amotoki at gmail.com> wrote:
>
>> # Added [Neutron] tag as well.
>>
>> Hi Igor,
>>
>> Thanks for the comment. We are already aware of them, as I commented
>> in the Summit session and the ML2 weekly meeting.
>> Kevin's blueprint now covers Ironic integration and the layer-2 network
>> gateway, and I believe the "campus-network" blueprint will be covered
>> as well.
>>
>> We think the work can be split into a generic API definition and
>> implementations (including ML2). In the "external attachment point"
>> blueprint review, the API and generic topics have mainly been discussed
>> so far, and the implementation details have not been discussed much
>> yet. The ML2 implementation details can be discussed later (separately
>> or as part of the blueprint review).
>>
>> I am not sure what changes are proposed in blueprint [1].
>> AFAIK an SDN/OpenFlow-controller-based approach can support this,
>> but how can we achieve it with the existing open source
>> implementations? I am also interested in the ML2 implementation
>> details.
>>
>> Anyway more input will be appreciated.
>>
>> Thanks,
>> Akihiro
>>
>> On Tue, May 20, 2014 at 7:13 PM, Igor Cardoso <igordcard at gmail.com>
>> wrote:
>> > Hello Kevin.
>> > There is a similar Neutron blueprint [1], originally meant for Havana
>> > but now aiming for Juno.
>> > I would be happy to join efforts with you regarding our blueprints.
>> > See also: [2].
>> >
>> > [1] https://blueprints.launchpad.net/neutron/+spec/ml2-external-port
>> > [2] https://blueprints.launchpad.net/neutron/+spec/campus-network
>> >
>> >
>> > On 19 May 2014 23:52, Kevin Benton <blak111 at gmail.com> wrote:
>> >>
>> >> Hello,
>> >>
>> >> I am working on an extension for neutron to allow external attachment
>> >> point information to be stored and used by backend plugins/drivers to
>> >> place switch ports into neutron networks [1].
>> >>
>> >> One of the primary use cases is to integrate ironic with neutron. The
>> >> basic workflow is that ironic will create the external attachment
>> >> points when servers are initially installed. This step could either be
>> >> automated (extracting the switch ID and port number from LLDP
>> >> messages) or it could be manually performed by an admin who notes the
>> >> ports a server is plugged into.
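>> >>
>> >> For the automated path, here is a minimal sketch of extracting those
>> >> two values from a raw LLDP frame (TLV types 1 and 2 are the standard
>> >> Chassis ID and Port ID TLVs; the framing assumptions are mine):
>> >>
>> >>     import struct
>> >>
>> >>     def parse_lldp(frame):
>> >>         """Pull the chassis (switch) ID and port ID out of a raw
>> >>         LLDP frame carried in a plain Ethernet header."""
>> >>         # EtherType for LLDP is 0x88cc, at bytes 12-13 of the frame.
>> >>         assert struct.unpack('!H', frame[12:14])[0] == 0x88cc
>> >>         offset, info = 14, {}
>> >>         while offset + 2 <= len(frame):
>> >>             # Each TLV header packs a 7-bit type and a 9-bit length.
>> >>             header, = struct.unpack('!H', frame[offset:offset + 2])
>> >>             tlv_type, tlv_len = header >> 9, header & 0x1ff
>> >>             value = frame[offset + 2:offset + 2 + tlv_len]
>> >>             if tlv_type == 0:      # End of LLDPDU
>> >>                 break
>> >>             elif tlv_type == 1:    # Chassis ID (skip subtype byte)
>> >>                 info['switch_id'] = value[1:]
>> >>             elif tlv_type == 2:    # Port ID (skip subtype byte)
>> >>                 info['port_id'] = value[1:]
>> >>             offset += 2 + tlv_len
>> >>         return info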
>> >>
>> >> Then when an instance is chosen for assignment and the neutron port
>> >> needs to be created, the creation request would reference the
>> >> corresponding attachment ID and neutron would configure the physical
>> >> switch port to place the port on the appropriate neutron network.
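>> >>
>> >> As a rough sketch of that creation request (the
>> >> 'external_attachment_id' attribute name is a placeholder; the real
>> >> attribute is defined in the review linked below):
>> >>
>> >>     from neutronclient.v2_0 import client
>> >>
>> >>     neutron = client.Client(username='admin', password='secret',
>> >>                             tenant_name='admin',
>> >>                             auth_url='http://keystone:5000/v2.0')
>> >>
>> >>     # Placeholder attribute name; see the blueprint review for the
>> >>     # actual API.
>> >>     port = neutron.create_port({'port': {
>> >>         'network_id': 'NETWORK_UUID',
>> >>         'external_attachment_id': 'ATTACHMENT_UUID',
>> >>     }})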
>> >>
>> >> If this workflow won't work for Ironic, please respond to this email or
>> >> leave comments on the blueprint review.
>> >>
>> >> 1. https://review.openstack.org/#/c/87825/
>> >>
>> >>
>> >> Thanks
>> >> --
>> >> Kevin Benton
>> >>
>> >
>> >
>> >
>> > --
>> > Igor Duarte Cardoso.
>> > http://igordcard.blogspot.com
>> >
>>
>
>
>


-- 
Kevin Benton