[infra][neutron] Removing networking-calico from OpenStack governance

Neil Jerram neil at tigera.io
Wed Feb 12 18:03:37 UTC 2020


On Wed, Feb 12, 2020 at 4:52 PM David Comay <david.comay at gmail.com> wrote:

>
>
>>> My primary concern, which isn't really about governance, would be making
>>> sure the components in `networking-calico` are kept in sync with the
>>> parent classes they inherit from Neutron itself. Is there a plan to keep
>>> these in sync going forward?
>>>
>>
>> Thanks for this question.  I think the answer is that it will be a
>> planned effort, from now on, for us to support new OpenStack versions.
>> From Kilo through to Rocky we have aimed (and managed, so far as I know) to
>> maintain a unified networking-calico codebase that works with all of those
>> versions.  However, our code does not support Python 3, and OpenStack master
>> now requires Python 3, so we have to invest work to have even the
>> possibility of working with Train and later.  More generally, it has been
>> frustrating, over the last 2 years or so, to track OpenStack master as the
>> CI requires, because breaking changes (in other OpenStack code) are made
>> frequently and we get hit by them when trying to fix or enhance something
>> (typically unrelated) in networking-calico.
>>
>
> I don't know the history here around `calico-dhcp-agent`, but have there
> been previous efforts to propose integrating the changes made to it into
> `neutron-dhcp-agent`? It seems the best solution would be to fold the
> functionality provided by the former into the latter, rather than relying
> on parent classes inherited from Neutron. I suspect there are details here
> on why that might be difficult, but it seems solving that would be helpful
> in the long term.
>

No efforts that I know of.  The difference is that calico-dhcp-agent is
driven by information in the Calico etcd datastore, whereas
neutron-dhcp-agent is driven via a message queue from the Neutron server.
I think this has improved since, but when we wrote calico-dhcp-agent a
few years ago, the message queue wasn't scaling well to hundreds of
nodes.  We can certainly keep reintegration in mind as a possibility.
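
In case it helps to make that difference concrete, here is a minimal sketch
(not networking-calico's actual code) of the calico-dhcp-agent model: watch a
key prefix in etcd and react to endpoint data as it changes, with no
dependency on the Neutron message queue.  It assumes the `etcd3` Python
client and an etcd server on localhost; the key prefix and data format are
invented for illustration and are not what the real agent uses.

import json

import etcd3

# Hypothetical prefix, for illustration only.
CALICO_ENDPOINT_PREFIX = '/calico/sketch/v1/endpoints/'


def watch_calico_endpoints():
    """Drive DHCP configuration from the Calico etcd datastore.

    The agent reads the current endpoint data once, then watches the same
    prefix for changes; it never talks to the Neutron message queue.
    """
    client = etcd3.client(host='127.0.0.1', port=2379)

    # Initial resync: read everything already under the prefix.
    for value, metadata in client.get_prefix(CALICO_ENDPOINT_PREFIX):
        endpoint = json.loads(value)
        print('resync endpoint %s -> %s' % (metadata.key, endpoint))

    # Then react to subsequent changes.
    events, cancel = client.watch_prefix(CALICO_ENDPOINT_PREFIX)
    try:
        for event in events:
            # A PutEvent means an endpoint was added or updated; a
            # DeleteEvent means it was removed.
            print('change on %s: %r' % (event.key, event.value))
    finally:
        cancel()


# By contrast, neutron-dhcp-agent is driven by RPC notifications that the
# Neutron server publishes onto the message queue (oslo.messaging, typically
# over RabbitMQ); that queue is the scaling path I was referring to above.

if __name__ == '__main__':
    watch_calico_endpoints()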

