[openstack-dev] [Openstack-operators] The state of nova-network to neutron migration

Oleg Bondarev obondarev at mirantis.com
Wed Dec 24 09:07:45 UTC 2014


On Mon, Dec 22, 2014 at 10:08 PM, Anita Kuno <anteaya at anteaya.info> wrote:
>
> On 12/22/2014 01:32 PM, Joe Gordon wrote:
> > On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery <mestery at mestery.com> wrote:
> >
> >> On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno <anteaya at anteaya.info> wrote:
> >>>
> >>> Rather than waste your time making excuses let me state where we
> >>> are and where I would like to get to, also sharing my thoughts
> >>> about how you can get involved if you want to see this happen as
> >>> badly as I have been told you do.
> >>>
> >>> Where we are:
> >>>     * a great deal of foundation work has been accomplished to
> >>> achieve parity between nova-network and neutron, to the extent that
> >>> those involved are ready for migration plans to be formulated and
> >>> put in place
> >>>     * a summit session happened with notes and intentions[0]
> >>>     * people took responsibility and promptly got swamped with other
> >>> responsibilities
> >>>     * spec deadlines arose and in neutron's case have passed
> >>>     * currently a neutron spec [1] is a work in progress (and it needs
> >>> significant work still) and a nova spec is required and doesn't have a
> >>> first draft or a champion
> >>>
> >>> Where I would like to get to:
> >>>     * I need people in addition to Oleg Bondarev to be available
> >>> to help come up with ideas and words to describe them to create
> >>> the specs in a very short amount of time (Oleg is doing great work
> >>> and is a fabulous person, yay Oleg, he just can't do this alone)
> >>>     * specifically I need a contact on the nova side of this complex
> >>> problem, similar to Oleg on the neutron side
> >>>     * we need to have a way for people involved with this effort
> >>> to find each other, talk to each other and track progress
> >>>     * we need to have representation at both nova and neutron weekly
> >>> meetings to communicate status and needs
> >>>
> >>> We are at K-2 and our current status is insufficient to expect
> >>> this work will be accomplished by the end of K-3. I will be
> >>> championing this work, in whatever state, so at least it doesn't
> >>> fall off the map. If you would like to help this effort please get
> >>> in contact. I will be thinking of ways to further this work and
> >>> will be communicating to those who identify as affected by these
> >>> decisions in the most effective methods of which I am capable.
> >>>
> >>> Thank you to all who have gotten us as far as we have gotten in
> >>> this effort, it has been a long haul and you have all done great
> >>> work. Let's keep going and finish this.
> >>>
> >>> Thank you,
> >>> Anita.
> >>>
> >> Thank you for volunteering to drive this effort Anita, I am very
> >> happy about this. I support you 100%.
> >>
> >> I'd like to point out that we really need a point of contact on the nova
> >> side, similar to Oleg on the Neutron side. IMHO, this is step 1 here to
> >> continue moving this forward.
> >>
> >
> > At the summit the nova team marked the nova-network to neutron
> > migration as a priority [0], so we are collectively interested in
> > seeing this happen and want to help in any way possible. With regard
> > to a nova point of contact, anyone in nova-specs-core should work,
> > that way we can cover more time zones.
> >
> > From what I can gather the first step is to finish fleshing out the first
> > spec [1], and it sounds like it would be good to get a few nova-cores
> > reviewing it as well.
> >
> >
> >
> >
> > [0] http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html
> > [1] https://review.openstack.org/#/c/142456/
> >
> >
> Wonderful, thank you for the support Joe.
>
> It appears that we need to have a regular weekly meeting to track
> progress in an archived manner.
>
> I know there was one meeting in November, but I don't know what it was
> called, so so far I haven't been able to find the logs for it.
>

It wasn't official, we just gathered together on #novamigration. Attaching
the log here.


> So if those affected by this issue can identify what time (UTC please;
> don't tell me what time zone you are in, it is too hard to guess what
> UTC times you are available) and day of the week you are available for
> a meeting, I'll create one and we can start talking to each other.
>
> I need to avoid Monday 1500 and 2100 UTC, Tuesday 0800 UTC, 1400 UTC and
> 1900 - 2200 UTC, Wednesdays 1500 - 1700 UTC, Thursdays 1400 and 2100 UTC.
>

I'm available each weekday 0700-1600 UTC; 1700-1800 UTC is also acceptable.

Thanks,
Oleg


>
> Thanks,
> Anita.
>
> >>
> >> Thanks,
> >> Kyle
> >>
> >>
> >>> [0] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron
> >>> [1] https://review.openstack.org/#/c/142456/
> >>>
-------------- next part --------------
<markmcclain> hi all
-*- marios_ lurking - thanks for the cc Oleg
<obondarev> hi
<obondarev> so I've created an etherpad
<obondarev> https://etherpad.openstack.org/p/neutron-migration-discussion
<obondarev> may be useful to record some thoughts
--> belmoreira (~belmoreir at pb-d-128-141-237-209.cern.ch) has joined #novamigration
<belmoreira> hi
--> jlibosva (~Adium at ip4-95-82-156-85.cust.nbox.cz) has joined #novamigration
<jlibosva> hi
<obondarev> belmoreira, jlibosva: https://etherpad.openstack.org/p/neutron-migration-discussion
<markmcclain> obondarev: thanks for filling out the etherpad
<obondarev> so I think we're here to discuss proposed migration path in a bit more details
<belmoreira> I just had a meeting discussing our internal migration to neutron :)
<obondarev> belmoreira: cool)
<markmcclain> belmoreira: I guess we have good timing
<markmcclain> obondarev: right... so should we walk through the steps?
<obondarev> markmcclain: yeah. I've put some questions to the etherpad, we may go through them to drive the discussion
<belmoreira> obondarev: I add a question about how iptables will be handled when moving to neutron
<obondarev> belmoreira: good question, thanks
<markmcclain> so starting with Step 0
--> josecastroleon (~josecastr at pcitis153.cern.ch) has joined #novamigration
<markmcclain> the idea of the reverse proxy to Nova is to ensure that we have a single source of truth for L3 information
<obondarev> should it be some new special neutron plugin?
<-- belmoreira (~belmoreir at pb-d-128-141-237-209.cern.ch) has quit (Max SendQ exceeded)
<markmcclain> initially I thought that would be the approach, but dansmith had a different suggestion which might be easier
--> belmoreira (~belmoreir at pb-d-128-141-28-112.cern.ch) has joined #novamigration
<obondarev> oh, interesting
<markmcclain> also, once we started discussing pluggable IPAM in neutron, we might not need to write a full proxy
<obondarev> yeah, the only issue is we don't know when pluggable IPAM will be there I'm afraid
<markmcclain> right... but it might be easier than writing a full reverse proxy
<obondarev> agree
<markmcclain> even with pluggable IPAM we would still need to handle the case of security groups
<belmoreira> I'm thinking of our infrastructure migration and I still don't think that a proxy (in the way I'm understanding it) will be useful
<obondarev> ok, so when speaking about the proxy the first question I get is what exact calls should be proxied, and who issues those calls
<markmcclain> belmoreira: so the idea/purpose of the proxy is to ensure that the control plane elements can continue to function and that an operator could perform a rolling upgrade
<belmoreira> markmcclain: this means configuring the neutron agent on compute nodes, but still using nova network?
<markmcclain> right, so initially the proxy would enable neutron API calls to be serviced from the same source of truth
<markmcclain> and using that one could run both nova-net and the neutron agent together
<belmoreira> markmcclain: ok, thanks
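For illustration only, a minimal sketch of what "neutron API calls serviced from the same source of truth" could look like: a neutron plugin method that reads network definitions from Nova via novaclient instead of from neutron's own database. The class and the field mapping are hypothetical assumptions, not the agreed design.

    # Hypothetical sketch: answer neutron's GET /networks from Nova's
    # data, keeping nova-network's database as the single source of truth.
    from novaclient import client as nova_client

    class NovaProxyPluginSketch(object):
        def __init__(self, username, password, tenant, auth_url):
            self._nova = nova_client.Client('2', username, password,
                                            tenant, auth_url)

        def get_networks(self, context, filters=None):
            # Translate Nova's legacy network objects into the dicts
            # neutron's API layer expects to return.
            return [{'id': net.id,
                     'name': net.label,
                     'admin_state_up': True,
                     'status': 'ACTIVE',
                     'subnets': []}  # cidr/gateway would map to a subnet
                    for net in self._nova.networks.list()]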
<markmcclain> dansmith's suggestion was to then create a special purpose nova management util
<obondarev> does it matter at this point which neutron agent is used?
<obondarev> is it a special nova-aware agent already?
<belmoreira> markmcclain: yes, so looking at the proposal the proxy is only there to keep the downtime small
<markmcclain> obondarev: so we have two choices: either implement a special L2 or proceed with ML2 on linuxbridge
<markmcclain> I think that for ease we do ML2+linuxbridge
<obondarev> markmcclain: got it
<markmcclain> that way the new nova-migrate command could be used to let the nova API know that the vifs are now managed by neutron and not nova
<obondarev> I guess neutron should also be notified of it
<markmcclain> obondarev: not sure we need to notify neutron
<markmcclain> the neutron db will still be proxied back to Nova
<obondarev> I mean currently ml2 + linuxbridge works slightly different than nova-net
<obondarev> what if user wants to update net config of an instance which was originally created with nova-net
<markmcclain> right but that's where the conversion step moves the tap device from the nova-net bridge onto ml2 managed bridge
<markmcclain> so during this process the support would be limited to operations supported by nova-net
<obondarev> ok, so we still have this bridge switching
<obondarev> not sure how this can be done for the tap devices of all instances at once
<markmcclain> so basically we create a command that would cause nova-compute to migrate the instances it manages
<obondarev> is it similar to the approach I proposed in summer: https://review.openstack.org/#/c/101921/8
<markmcclain> obondarev: except we're sticking with linuxbridge
<markmcclain> and maintain a single source of truth
<obondarev> got it
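As a rough sketch of the per-instance conversion step discussed above, and assuming the classic brctl tooling with illustrative device and bridge names, moving one instance's tap device off the nova-network bridge onto the bridge the ML2 linuxbridge agent manages could look like:

    import subprocess

    def move_tap(tap_dev, nova_bridge, ml2_bridge):
        # Detach the tap from the bridge nova-network created...
        subprocess.check_call(['brctl', 'delif', nova_bridge, tap_dev])
        # ...and plug it into the ML2-managed bridge. Connectivity is
        # interrupted between these two calls, which is why the plan
        # above aims to keep this window small.
        subprocess.check_call(['brctl', 'addif', ml2_bridge, tap_dev])

    # e.g. move_tap('tap0', 'br100', 'brq-example')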
<markmcclain> the split brain migration we explored this summer
<markmcclain> made some operators uncomfortable
<obondarev> and this should not be a nova api extension but some nova-manage command, right?
<markmcclain> dan suggested making it a special management command
<markmcclain> that way we don't have to worry about all of the hoops necessary to update the API
<markmcclain> all we'd need is to make the changes to compute and conductor
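To make the shape of dan's suggestion concrete, here is a purely hypothetical sketch in the style of nova-manage command classes; the class name, method, and behavior are assumptions, not an agreed interface.

    class NeutronMigrateCommands(object):
        """Hypothetical management command: hand a host's vifs to neutron."""

        def migrate_host(self, host):
            # The real version would go through conductor/compute RPC
            # (internal interfaces only, so no public API changes):
            #  1. ask nova-compute on `host` to move each instance's tap
            #     device onto the ML2-managed bridge;
            #  2. record that the vifs are now managed by neutron so nova
            #     stops programming bridges and iptables rules for them.
            print('would migrate instances on %s to neutron' % host)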
<obondarev> so, for the single source of truth, what is the contact point in nova for Neutron?
<markmcclain> but since these are internal interfaces it's a lot less hassle
<markmcclain> initially I had considered proxying to the Nova REST API where possible
<markmcclain> it won't have great performance, but this is meant to be a transitional phase
<markmcclain> so that an operator does not need a long outage
<obondarev> I'm afraid rest api would not be enough
<obondarev> just trying to understand
<markmcclain> I think we'll have access to everything we need
<obondarev> let's say we have neutron running referencing nova as a source of truth
<obondarev> then a user creates a port 
<obondarev> what neutron should do is allocate a new fixed IP in nova first
<obondarev> not sure this can be done through nova rest api at the moment
<markmcclain> correct
<obondarev> I might miss something
<markmcclain> yeah the other alternative now that pluggable IPAM is on the horizon
<obondarev> pluggable IPAM becomes a must for nova-net to neutron migration I guess :)
--> marekd (~marek at skamander.octogan.net) has joined #novamigration
<markmcclain> yeah pluggable IPAM actually makes this a bit easier
<obondarev> anyway, once we have pluggable IPAM in neutron
<marios_> is that the only part of the api that we currently can't proxy directly to nova-compute?
<obondarev> we'll need to have nova-net driver for it in neutron, right?
<marios_> (thanks, sorry for noise, just trying to understand/follow, pls ignore)
<markmcclain> marios_: I believe so
<markmcclain> obondarev: yes, or a temporary monolithic plugin
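The pluggable IPAM interface was still in spec review when this chat took place, so the following is only an illustrative sketch of the shape a nova-net-backed driver could take; every name in it is a placeholder, not a real neutron interface.

    class NovaNetIpamDriverSketch(object):
        """Defer IP allocation to nova-network during the transition."""

        def allocate_ip(self, subnet_id, port_id):
            # Instead of allocating from neutron's own tables, ask the
            # nova-network side (the single source of truth) to reserve
            # the next free fixed IP for this port.
            return self._nova_allocate_fixed_ip(subnet_id, port_id)

        def release_ip(self, subnet_id, ip_address):
            # Return the address to nova-network's pool.
            self._nova_release_fixed_ip(subnet_id, ip_address)

        def _nova_allocate_fixed_ip(self, subnet_id, port_id):
            raise NotImplementedError('placeholder for a call into nova')

        def _nova_release_fixed_ip(self, subnet_id, ip_address):
            raise NotImplementedError('placeholder for a call into nova')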
<markmcclain> and then once all of the hypervisors are transitioned to being managed by neutron
<markmcclain> the api would be frozen and we would do a dump > translate > restore of the data from nova to neutron
<markmcclain> during this step we'd also need to switch the L3 elements
<markmcclain> and bring up any routers, DHCP servers, etc
<obondarev> I see, cool
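A toy sketch of the translate step in dump > translate > restore: map one frozen nova-network row to the neutron records it becomes. The column and field names here are assumptions based on nova's legacy networks table and may not match either schema exactly.

    def translate_network(nova_row):
        """Map a nova-network 'networks' row to a neutron network + subnet."""
        network = {'name': nova_row['label'],
                   'admin_state_up': True,
                   'shared': False}
        subnet = {'cidr': nova_row['cidr'],
                  'gateway_ip': nova_row['gateway'],
                  'enable_dhcp': True}  # nova-net ran dnsmasq per network
        return network, subnet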
<marios_> can we/do we already have this in a spec? I think it would be hugely helpful
<obondarev> marios_: no spec yet, I guess we'll need two specs at least
<obondarev> for nova and neutron
<markmcclain> correct we'll need two specs
<markmcclain> the nova team is expecting one from me for that side of it
<markmcclain> and I was thinking obondarev you could lead writing the neutron one
<obondarev> I can work on neutron spec then
<obondarev> yeah)
<marios_> obondarev: i'd be grateful if you added me as reviewer - will keep a lookout anyway
<obondarev> marios_: sure, I will
<jlibosva> I'll review too
<obondarev> so pluggable IPAM is kind of risky for the neutron spec to wait for
<markmcclain> yeah that is a concern of mine
<obondarev> probably need to include both options there
<markmcclain> there is a spec in the review queue for it, but not sure it's close enough to final form
<markmcclain> yeah I think options are good
<obondarev> IPAM and monolithic
<markmcclain> was also hoping by the end of the SLC sprint we'd have a better idea of the IPAM spec
<obondarev> you mean that one in December?
<markmcclain> yes
<markmcclain> it is something we can work on over IRC/email
<markmcclain> but there is so much in flight that the ipam is a bit held up on the API refactor stuff
<obondarev> sorry, you mean what?
<markmcclain> the migration spec is something we work on over IRC/email
<obondarev> oh, got it
<obondarev> right
<markmcclain> and I'm hoping the IPAM spec will start to converge in the next week or so
<obondarev> would be great
<markmcclain> the holiday here in the US will create a drag on velocity
<obondarev> :)
<markmcclain> but mestery and I are really wanting to get many of the items solidified before everyone gets distracted with end of the year stuff
<obondarev> would be cool to have the spec landed in kilo-1
<obondarev> specs*
<markmcclain> agreed... if we don't I'll have an army of nova cores asking questions :)
<obondarev> haha)
<obondarev> anyone has more questions at this point?
<obondarev> ok, cool
<markmcclain> so there was the one item I wanted to circle back to
<obondarev> sure
<markmcclain> belmoreira: mentioned IPtables
<markmcclain> we'll have to migrate the rules when we move the device to a new bridge
<markmcclain> otherwise we'll need to make ml2 aware of how nova writes the rules which might be messy
<obondarev> does migrate mean clearing the old ones and creating new ones?
<markmcclain> yeah we will need to clear the old ones
<markmcclain> but I think that should be accomplished once nova no longer thinks that it is managing that vif
<obondarev> so nova will clear rules, right?
<markmcclain> that's my working plan
<markmcclain> we'll be interrupting connections
<markmcclain> but that seems like the best approach
<obondarev> agree
<obondarev> seems we're running out of time..
-*- marios_ joins another call
<markmcclain> thanks everyone for joining the chat
<marios_> thanks for the invite oleg, will try to tag along and help out with whatever bits are useful
<belmoreira> we have different chains in nova and neutron
<belmoreira> can nova chains be removed only when all neutron ones are ready?
<markmcclain> belmoreira: yes.. I think that we can orchestrate it so that occurs
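A sketch of the ordering belmoreira is asking about, i.e. only drop the nova chains once the neutron ones exist. The chain-name prefixes are illustrative and this is not the agreed mechanism, just one way the orchestration could check before clearing rules:

    import subprocess

    def existing_chains():
        # Chain definitions in `iptables-save` output start with ':'.
        out = subprocess.check_output(['iptables-save'],
                                      universal_newlines=True)
        return [line.split()[0][1:] for line in out.splitlines()
                if line.startswith(':')]

    def drop_nova_chains():
        chains = existing_chains()
        if not any(c.startswith('neutron-') for c in chains):
            raise RuntimeError('neutron chains not in place yet, '
                               'keeping the nova-net rules')
        for chain in [c for c in chains if c.startswith('nova-')]:
            # In practice any jumps into these chains would have to be
            # removed first; -X fails while a chain is still referenced.
            subprocess.check_call(['iptables', '-F', chain])
            subprocess.check_call(['iptables', '-X', chain])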
<obondarev> thanks everyone, let's continue on irc/ML, and we can have another meeting like this if needed
<jlibosva> thanks and see you on irc
<markmcclain> obondarev: thanks for organizing this
<belmoreira> thanks
<josecastroleon> thanks

