[openstack-dev] [Ironic][Neutron] - Integration with neutron using external attachment point

Igor Cardoso igordcard at gmail.com
Wed May 21 23:49:39 UTC 2014


Akihiro,
>Thanks for the comment. We already know them as I commented
>in the Summit session and ML2 weekly meeting.
I'm interested in taking a look at the logs of that ML2 meeting, but
I haven't been able to find anything going back to the beginning of March...

Kevin,
>Have you had a chance to look at the details of our blueprint?
Yes, I've fully read it.
>Are there any workflows supported by yours that we forgot?
Not really. Your proposal is more complete and goes in the direction I had
in mind as well, especially regarding the L2 Gateway use case.
>We would be happy to have you help on the reference implementation for
this.
You can count on me ;)

I'm sure we are on the right path to a very robust and generic way
of achieving heterogeneous bare-metal networking in Neutron.


On 21 May 2014 18:14, Stig Telfer <stelfer at cray.com> wrote:

>  Our team here has been looking at something closely related that may be
> of interest.  There seems to be good scope for collaboration.
>
>
>
> Here’s our proposal, which includes support for bare metal networking with
> the VLAN mechanism driver:
>
>
>
> https://blueprints.launchpad.net/neutron/+spec/ml2-mechanism-snmp-vlan
>
>
>
> Our project is a point solution at your step 6.  The rest of the workflow
> looks complementary and solves the unanswered questions in our bp
> proposal.  As indeed would the neutron-external-ports spec.
>
>
>
> Best wishes,
>
> Stig Telfer
>
> Cray Inc.
>
>
>
>
>
> *From:* Russell Haering [mailto:russellhaering at gmail.com]
> *Sent:* Tuesday, May 20, 2014 11:01 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Ironic][Neutron] - Integration with
> neutron using external attachment point
>
>
>
> We've been experimenting some with how to use Neutron with Ironic here at
> Rackspace.
>
>
>
> Our very experimental code:
> https://github.com/rackerlabs/ironic-neutron-plugin
>
>
>
> Our objective is the same as what you're describing, to allow Nova servers
> backed by Ironic to attach to arbitrary Neutron networks. We're initially
> targeting VLAN-based networks only, but eventually want to do VXLAN from
> the top-of-rack switches, controlled via an SDN controller.
>
>
>
> Our approach is a little different than what you're describing though. Our
> objective is to modify the existing Nova -> Neutron interaction as little
> as possible, which means approaching the problem by thinking "how would an
> L2 agent do this?".
>
>
>
> The workflow looks something like:
>
>
>
> 1. Nova calls Neutron to create a virtual "port". Because this happens
> _before_ Nova touches the virt driver, the port is at this point identical
> to one created for a virtual server.
>
> 2. Nova executes the "spawn" method of the Ironic virt driver, which makes
> some calls to Ironic.
>
> 3. Inside Ironic, we know about the physical switch ports that the
> selected Node is connected to. This information is discovered early-on
> using LLDP and stored in the Ironic database.
>
> 4. We actually need the node to remain on an internal provisioning VLAN
> for most of the provisioning process, but once we're done with on-host work
> we turn the server off.
>
> 5. Ironic deletes a Neutron port that was created at bootstrap time to
> trunk the physical switch ports for provisioning.
>
> 6. Ironic updates each of the customer's Neutron ports with information
> about its physical switch port.
>
> 7. Our Neutron extension configures the switches accordingly.
>
> 8. Then Ironic brings the server back up.
>
>
>
> The destroy process basically does the reverse. Ironic removes the
> physical switch mapping from the Neutron ports, re-creates an internal
> trunked port, does some work to tear down the server, then passes control
> back to Nova. At that point Nova can do what it wants with the Neutron
> ports. Hypothetically that could include allocating them to a different
> Ironic Node, etc, although in practice it just deletes them.
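[The provisioning and teardown flow Russell describes above could be sketched roughly as follows. All names here (FakeSwitch, attach_node_to_network, binding:physical_port) are hypothetical illustrations only; this is not the rackerlabs plugin code.]

```python
# Toy model of steps 5-7 above: swap a node's physical switch ports
# off an internal provisioning VLAN and onto the customer's network.
PROVISION_VLAN = 99

class FakeSwitch:
    """Stands in for whatever actually programs the top-of-rack switch."""
    def __init__(self):
        self.port_vlans = {}  # physical port -> access VLAN

    def set_access_vlan(self, port, vlan):
        self.port_vlans[port] = vlan

def attach_node_to_network(switch, neutron_ports, physical_ports, vlan):
    """Steps 5-7: drop the provisioning trunk, annotate the customer's
    Neutron ports with their physical switch ports, program the switch."""
    # Step 5: the provisioning configuration goes away.
    for phys in physical_ports:
        switch.port_vlans.pop(phys, None)
    for nport, phys in zip(neutron_ports, physical_ports):
        # Step 6: record the physical mapping on each Neutron port
        # (hypothetical attribute name).
        nport['binding:physical_port'] = phys
        # Step 7: configure the switch accordingly.
        switch.set_access_vlan(phys, vlan)
    return neutron_ports

switch = FakeSwitch()
switch.set_access_vlan('eth1/1', PROVISION_VLAN)  # set up at bootstrap
ports = attach_node_to_network(
    switch, [{'id': 'port-a'}], ['eth1/1'], vlan=1234)
print(switch.port_vlans)   # {'eth1/1': 1234}
```

[The destroy path would just run the same mapping in reverse: clear the physical mapping and put the port back on PROVISION_VLAN.]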
>
>
>
> Again, this is all very experimental in nature, but it seems to work
> fairly well for the use-cases we've considered. We'd love to find a way to
> collaborate with others working on similar problems.
>
>
>
> Thanks,
>
> Russell
>
>
>
> On Tue, May 20, 2014 at 7:17 AM, Akihiro Motoki <amotoki at gmail.com> wrote:
>
> # Added [Neutron] tag as well.
>
> Hi Igor,
>
> Thanks for the comment. We already know them as I commented
> in the Summit session and ML2 weekly meeting.
> Kevin's blueprint now covers Ironic integration and the layer-2 network
> gateway, and I believe the "campus-network" blueprint will be covered as well.
>
> We think the work can be split into a generic API definition and its
> implementations (including ML2). In the "external attachment point"
> blueprint review, the API and generic topics have mainly been discussed
> so far, and the implementation details have not been discussed much yet.
> The ML2 implementation details can be discussed later
> (separately or as a part of the blueprint review).
>
> I am not sure what changes are proposed in blueprint [1].
> AFAIK an SDN/OpenFlow controller based approach can support this,
> but how can we achieve this with the existing open source implementations?
> I am also interested in the ML2 implementation details.
>
> Anyway, more input would be appreciated.
>
> Thanks,
> Akihiro
>
>
> On Tue, May 20, 2014 at 7:13 PM, Igor Cardoso <igordcard at gmail.com> wrote:
> > Hello Kevin.
> > There is a similar Neutron blueprint [1], originally meant for Havana but
> > now aiming for Juno.
> > I would be happy to join efforts with you regarding our blueprints.
> > See also: [2].
> >
> > [1] https://blueprints.launchpad.net/neutron/+spec/ml2-external-port
> > [2] https://blueprints.launchpad.net/neutron/+spec/campus-network
> >
> >
> > On 19 May 2014 23:52, Kevin Benton <blak111 at gmail.com> wrote:
> >>
> >> Hello,
> >>
> >> I am working on an extension for neutron to allow external attachment
> >> point information to be stored and used by backend plugins/drivers to
> >> place switch ports into neutron networks [1].
> >>
> >> One of the primary use cases is to integrate ironic with neutron. The
> >> basic workflow is that ironic will create the external attachment points
> >> when servers are initially installed. This step could either be automated
> >> (extracting the switch ID and port number from LLDP messages) or be
> >> performed manually by an admin who notes the ports a server is plugged
> >> into.
> >>
> >> Then when an instance is chosen for assignment and the neutron port
> >> needs to be created, the creation request would reference the
> >> corresponding attachment ID and neutron would configure the physical
> >> switch port to place the port on the appropriate neutron network.
> >>
> >> If this workflow won't work for Ironic, please respond to this email or
> >> leave comments on the blueprint review.
> >>
> >> 1. https://review.openstack.org/#/c/87825/
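[Kevin's attachment-point workflow above could be sketched like this. The names (ExternalAttachmentPoint, create_attachment, attachment_id) are illustrative only and are not taken from the actual blueprint or review.]

```python
# Toy model of the proposed workflow: Ironic registers where a server is
# cabled, then a later port-create request references that attachment.
class ExternalAttachmentPoint:
    """Records which physical switch port a server is plugged into,
    created at install time (from LLDP discovery or by an admin)."""
    def __init__(self, ap_id, switch_id, port_num):
        self.ap_id = ap_id
        self.switch_id = switch_id
        self.port_num = port_num

class FakeNeutron:
    """Toy stand-in for Neutron plus the proposed extension."""
    def __init__(self):
        self.attachments = {}
        self.switch_config = {}  # (switch_id, port_num) -> network_id

    def create_attachment(self, ap):
        self.attachments[ap.ap_id] = ap

    def create_port(self, network_id, attachment_id):
        # A port create referencing an attachment ID triggers the backend
        # to place the physical switch port on the requested network.
        ap = self.attachments[attachment_id]
        self.switch_config[(ap.switch_id, ap.port_num)] = network_id
        return {'network_id': network_id, 'attachment_id': attachment_id}

neutron = FakeNeutron()
# Install time: ironic registers where the server is plugged in.
neutron.create_attachment(
    ExternalAttachmentPoint('ap-1', 'tor-sw-3', 'eth1/7'))
# Deploy time: the port create request names the attachment point.
port = neutron.create_port('net-blue', 'ap-1')
print(neutron.switch_config)   # {('tor-sw-3', 'eth1/7'): 'net-blue'}
```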
> >>
> >>
> >> Thanks
> >> --
> >> Kevin Benton
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Igor Duarte Cardoso.
> > http://igordcard.blogspot.com
> >
> >
>
>
>
>
>
>


-- 
Igor Duarte Cardoso.
http://igordcard.blogspot.com

