[Openstack-operators] [nova][neutron] What are your cells networking use cases?

Belmiro Moreira moreira.belmiro.email.lists at gmail.com
Fri Feb 26 11:21:06 UTC 2016


Hi,
thanks Carl for info about the DHCP plans.

Our DHCP concern is that currently the DHCP agent needs to be assigned
to a network, and it then creates a port for each subnet.
In our infrastructure we are considering only a single network, with
several hundred subnets.
By default the DHCP agent runs on the network node; however, when using
provider networks with segmentation, the DHCP requests will not reach it.

My understanding is that the plan is to have a DHCP agent per segment.
That is great.
But will it continue to create a port per subnet?
Looking at our use case (only provider networks with segmentation), I
don't see why it should create ports.
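For context, the segment-aware scheduling Carl describes below could be
sketched roughly like this. All of the class and function names here are
hypothetical, for illustration only, and are not actual Neutron code:

```python
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class Subnet:
    enable_dhcp: bool

@dataclass
class Segment:
    id: str
    physical_network: str
    subnets: List[Subnet]

@dataclass
class Agent:
    host: str
    # Physical networks the agent reported, e.g. via bridge mappings.
    reachable_physnets: Set[str]

@dataclass
class Network:
    segments: List[Segment]

def schedule_segments(network: Network, agents: List[Agent]) -> Dict[str, str]:
    """Map each DHCP-enabled segment to the host of an agent that can reach it."""
    bindings = {}
    for segment in network.segments:
        # Skip segments with no DHCP-enabled subnet (the 'enable_dhcp' predicate).
        if not any(s.enable_dhcp for s in segment.subnets):
            continue
        # Candidates are agents that reported this segment's physical_network.
        candidates = [a for a in agents
                      if segment.physical_network in a.reachable_physnets]
        if candidates:
            bindings[segment.id] = candidates[0].host
    return bindings
```

In a sketch like this, the operator's responsibility is to make sure at
least one agent exists per segment; otherwise a DHCP-enabled segment
simply ends up unscheduled.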

Belmiro

On Fri, Feb 26, 2016 at 2:55 AM, Sam Morrison <sorrison at gmail.com> wrote:

>
> > On 26 Feb 2016, at 9:20 AM, Carl Baldwin <carl at ecbaldwin.net> wrote:
> >
> > (resending with reply-all)
> >
> > The routed networks work will include a change to the DHCP scheduler
> > which will work something like this:
> >
> > 1. Neutron subnets will have optional affinity to a segment
> > 2. DHCP agents will (somewhat indirectly) report which segments to
> > which they are attached*.
> > 3. Where today, DHCP schedules networks to DHCP agents, tomorrow DHCP
> > will schedule each segment to an agent that can reach it.  This will
> > be predicated on 'enable_dhcp' being set on the subnets.
> >
> > There is an implicit assumption here that the operator will deploy a
> > DHCP agent in each of the segments.  This will be documented in the
> > guide.
>
> I assume you’re referring to https://review.openstack.org/#/c/205631/
> Really keen to get this in; we're using it in prod and it works well for us.
>
> Sam
>
>
>
>
> > Down the road, I really think we should continue to explore other
> > possibilities like DHCP relay or a DHCP responder on the compute host.
> > But, that should be considered an independent effort.
> >
> > Carl
> >
> > * they already do this by reporting physical_network in bridge mappings
> >
> > On Thu, Feb 25, 2016 at 11:30 AM, Tim Bell <Tim.Bell at cern.ch> wrote:
> >>
> >> The CERN guys had some concerns on how dhcp was working in a segment
> environment. I’ll leave them to give details.
> >>
> >> Tim
> >>
> >>
> >>
> >>
> >>
> >> On 25/02/16 14:53, "Andrew Laski" <andrew at lascii.com> wrote:
> >>
> >>>
> >>>
> >>> On Thu, Feb 25, 2016, at 05:01 AM, Tim Bell wrote:
> >>>>
> >>>> CERN info added.. Feel free to come back for more information if
> needed.
> >>>
> >>> An additional piece of information we're specifically interested in
> from
> >>> all cellsv1 deployments is around the networking control plane setup.
> Is
> >>> there a single nova-net/Neutron deployment per region that is shared
> >>> among cells? It appears that all cells users are splitting the network
> >>> data plane into clusters/segments, are similar things being done to the
> >>> control plane?
> >>>
> >>>
> >>>>
> >>>> Tim
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> On 24/02/16 22:47, "Edgar Magana" <edgar.magana at workday.com> wrote:
> >>>>
> >>>>> It will be awesome if we can add this doc into the networking guide
> :-)
> >>>>>
> >>>>>
> >>>>> Edgar
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 2/24/16, 1:42 PM, "Matt Riedemann" <mriedem at linux.vnet.ibm.com>
> wrote:
> >>>>>
> >>>>>> The nova and neutron teams are trying to sort out existing
> deployment
> >>>>>> network scenarios for cells v1 so we can try and document some of
> that
> >>>>>> and get an idea if things change at all with cells v2.
> >>>>>>
> >>>>>> Therefore we're asking that deployers running cells please document
> >>>>>> anything you can in an etherpad [1].
> >>>>>>
> >>>>>> We'll try to distill that for upstream docs at some point and then
> use
> >>>>>> it as a reference when talking about cells v2 + networking.
> >>>>>>
> >>>>>> [1] https://etherpad.openstack.org/p/cells-networking-use-cases
> >>>>>>
> >>>>>> --
> >>>>>>
> >>>>>> Thanks,
> >>>>>>
> >>>>>> Matt Riedemann
> >>>>>>
> >>>>>>
> >>>>>> _______________________________________________
> >>>>>> OpenStack-operators mailing list
> >>>>>> OpenStack-operators at lists.openstack.org
> >>>>>>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>>
> >>
> >>
> >
>
>
>