[largescale-sig][neutron] What driver are you using?

Arnaud Morin arnaud.morin at gmail.com
Mon May 17 14:59:33 UTC 2021


Hi Laurent,

Thanks for your reply!
I agree that it depends on the scale and target usage.
About the VLAN you are using for external networks, would you be willing
to share how many public IPs you have in this L2 domain for a region?

Cheers,

On 11.05.21 - 19:21, Laurent Dumont wrote:
> I feel like it depends a lot on the scale/target usage (public vs private
> cloud).
> 
> But at $dayjob, we are leveraging
> 
>    - vlans for external networking (linux-bridge + OVS)
>    - vxlans for internal OpenStack networks.
> 
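For reference, that kind of split usually maps onto an ml2_conf.ini along
these lines (a sketch only; the physnet name and the ID ranges are made up):

    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,l2population

    [ml2_type_vlan]
    # provider/external networks carved out of a VLAN range on physnet1
    network_vlan_ranges = physnet1:100:199

    [ml2_type_vxlan]
    # tenant networks are allocated VNIs from this pool
    vni_ranges = 10000:19999
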
> We like the simplicity of vxlan with minimal overlay configuration. There
> are some scaling/performance issues with stuff like l2 population.
> 
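(For context on l2pop: it pre-populates forwarding entries and ARP
responses on each agent so broadcasts are not flooded to every tunnel
endpoint, but each port update fans out RPC messages to all relevant
agents, which is where it tends to hurt at scale. On the agent side it is
enabled with something like the following, assuming the OVS agent:)

    # openvswitch_agent.ini
    [agent]
    l2_population = true
    arp_responder = true
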
> VLANs are okay, but it's hard to predict the next 5 years of growth.
> 
> On Mon, May 10, 2021 at 8:34 AM Arnaud Morin <arnaud.morin at gmail.com> wrote:
> 
> > Hey large-scalers,
> >
> > We had a discussion at my company (OVH) about Neutron drivers.
> > We are using a custom driver based on BGP for public networking, and
> > another custom driver for private networking (based on VLANs).
> >
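For comparison, the closest upstream analogue to such a BGP setup is
probably the neutron-dynamic-routing project, which announces routes
(e.g. floating IPs and tenant subnets) from a BGP speaker agent. A rough
sketch, where the AS numbers, addresses and names are all placeholders:

    openstack bgp speaker create --local-as 64512 --ip-version 4 myspeaker
    openstack bgp peer create --peer-ip 192.0.2.1 --remote-as 64513 mypeer
    openstack bgp speaker add peer myspeaker mypeer
    openstack bgp speaker add network myspeaker mynet
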
> > Benefits from this are obvious:
> > - we maintain the code
> > - we do what we want, not more, not less
> > - it fits perfectly to the network layer our company is using
> > - we have full control of the networking stack
> >
> > But it also has some downsides:
> > - we have to maintain the code... (rebasing, etc.)
> > - we introduce bugs that are not upstream (more code, more bugs)
> > - code changes take longer, as we have few people working on this
> >   (compared to a community-based project)
> > - this is not upstream (so not open source)
> > - we are not sharing (bad)
> >
> > So, we were wondering which drivers are used upstream in large-scale
> > environments (I am not sure a VLAN driver can be used with more than 500
> > hypervisors / I don't know about VXLAN or any other solution).
> >
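On raw segment counts the arithmetic is clear-cut:

    VLAN ID:   12 bits -> 2^12 = 4096 segments (~4094 usable)
    VXLAN VNI: 24 bits -> 2^24 = 16,777,216 segments

So the 500-hypervisor worry with VLANs is less about running out of IDs
and more about stretching a single L2 domain that far (broadcast traffic
and MAC table pressure on the switches).
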
> > Is there anyone willing to share this info?
> >
> > Thanks in advance!
> >
> >


