[openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

Oleg Bondarev obondarev at mirantis.com
Wed Apr 30 14:30:42 UTC 2014


I've tried updating the interface while running an ssh session from the guest
to the host, and the session was dropped :(

07:27:58.676570 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 44:88, ack 61, win 2563, options [nop,nop,TS val 4539607 ecr 24227108], length 44
07:27:58.677161 IP 172.18.76.80.22 > 10.0.0.4.52556: Flags [P.], seq 61:121, ack 88, win 277, options [nop,nop,TS val 24227149 ecr 4539607], length 60
07:27:58.677720 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [.], ack 121, win 2563, options [nop,nop,TS val 4539608 ecr 24227149], length 0
07:27:59.087582 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 88:132, ack 121, win 2563, options [nop,nop,TS val 4539710 ecr 24227149], length 44
07:27:59.088140 IP 172.18.76.80.22 > 10.0.0.4.52556: Flags [P.], seq 121:181, ack 132, win 277, options [nop,nop,TS val 24227251 ecr 4539710], length 60
07:27:59.088487 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [.], ack 181, win 2563, options [nop,nop,TS val 4539710 ecr 24227251], length 0
[vm interface updated..]
07:28:17.157594 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544228 ecr 24227251], length 44
07:28:17.321060 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 176:220, ack 181, win 2563, options [nop,nop,TS val 4544268 ecr 24227251], length 44
07:28:17.361835 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544279 ecr 24227251], length 44
07:28:17.769935 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544381 ecr 24227251], length 44
07:28:18.585887 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544585 ecr 24227251], length 44
07:28:20.221797 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544994 ecr 24227251], length 44
07:28:23.493540 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4545812 ecr 24227251], length 44
07:28:30.037927 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4547448 ecr 24227251], length 44
07:28:35.045733 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
07:28:36.045388 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
07:28:37.045900 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
07:28:43.063118 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:ec:eb:a4, length 280
07:28:43.084384 IP 10.0.0.3.67 > 10.0.0.4.68: BOOTP/DHCP, Reply, length 323
07:28:43.085038 ARP, Request who-has 10.0.0.3 tell 10.0.0.4, length 28
07:28:43.099463 ARP, Reply 10.0.0.3 is-at fa:16:3e:79:9b:9c, length 28
07:28:43.099841 IP 10.0.0.4 > 10.0.0.3: ICMP 10.0.0.4 udp port 68 unreachable, length 359
07:28:43.125379 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
07:28:43.125626 ARP, Reply 10.0.0.1 is-at fa:16:3e:61:28:fa, length 28
07:28:43.125907 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4550720 ecr 24227251], length 44
07:28:43.132650 IP 172.18.76.80.22 > 10.0.0.4.52556: Flags [R], seq 369316248, win 0, length 0
07:28:48.148853 ARP, Request who-has 10.0.0.4 tell 10.0.0.1, length 28
07:28:48.149377 ARP, Reply 10.0.0.4 is-at fa:16:3e:ec:eb:a4, length 28


On Wed, Apr 30, 2014 at 5:50 PM, Kyle Mestery <mestery at noironetworks.com> wrote:

> Agreed, ping was a good first tool to verify downtime, but trying with
> something using TCP at this point would be useful as well.
>
> On Wed, Apr 30, 2014 at 8:39 AM, Eugene Nikanorov
> <enikanorov at mirantis.com> wrote:
> > I think it's better to test with some tcp connection (ssh session?)
> > rather than with ping.
> >
> > Eugene.
> >
> >
> > On Wed, Apr 30, 2014 at 5:28 PM, Oleg Bondarev <obondarev at mirantis.com>
> > wrote:
> >>
> >> So by running ping during the instance interface update we can see
> >> ~10-20 sec of connectivity downtime. Here is a tcpdump capture taken
> >> during the update (pinging the ext net gateway):
> >>
> >> 05:58:41.020791 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 10, length 64
> >> 05:58:41.020866 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 10, length 64
> >> 05:58:41.885381 STP 802.1s, Rapid STP, CIST Flags [Learn, Forward, Agreement]
> >> 05:58:42.022785 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 11, length 64
> >> 05:58:42.022832 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 11, length 64
> >> [vm interface updated..]
> >> 05:58:43.023310 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 12, length 64
> >> 05:58:44.024042 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 13, length 64
> >> 05:58:45.025760 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 14, length 64
> >> 05:58:46.026260 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 15, length 64
> >> 05:58:47.027813 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 16, length 64
> >> 05:58:48.028229 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 17, length 64
> >> 05:58:49.029881 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 18, length 64
> >> 05:58:50.029952 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 19, length 64
> >> 05:58:51.031380 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 20, length 64
> >> 05:58:52.032012 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 21, length 64
> >> 05:58:53.033456 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 22, length 64
> >> 05:58:54.034061 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 23, length 64
> >> 05:58:55.035170 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 24, length 64
> >> 05:58:56.035988 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 25, length 64
> >> 05:58:57.037285 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 26, length 64
> >> 05:58:57.045691 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
> >> 05:58:58.038245 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 27, length 64
> >> 05:58:58.045496 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
> >> 05:58:59.040143 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 28, length 64
> >> 05:58:59.045609 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
> >> 05:59:00.040789 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 29, length 64
> >> 05:59:01.042333 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
> >> 05:59:01.042618 ARP, Reply 10.0.0.1 is-at fa:16:3e:61:28:fa (oui Unknown), length 28
> >> 05:59:01.043471 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 30, length 64
> >> 05:59:01.063176 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 30, length 64
> >> 05:59:02.042699 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 31, length 64
> >> 05:59:02.042840 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 31, length 64
> >>
> >> However this connectivity downtime can be significantly reduced by
> >> restarting the network service on the instance right after the
> >> interface update.
> >>
> >>
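Counting the unanswered echo seqs in the quoted capture gives the same
~18-20 second figure. A small sketch (seq values copied from the capture,
which ran at one ping per second):

```python
def lost_pings(sent_seqs, replied_seqs):
    """Echo request sequence numbers that never saw an echo reply."""
    return sorted(set(sent_seqs) - set(replied_seqs))

sent = list(range(10, 32))        # seq 10..31 appear as echo requests
replied = [10, 11, 30, 31]        # only these seqs got echo replies
lost = lost_pings(sent, replied)  # seq 12..29 went unanswered
print(len(lost))  # → 18, i.e. ~18 s of downtime at 1 ping/s
```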
> >> On Mon, Apr 28, 2014 at 6:29 PM, Kyle Mestery
> >> <mestery at noironetworks.com> wrote:
> >>>
> >>> On Mon, Apr 28, 2014 at 9:19 AM, Oleg Bondarev
> >>> <obondarev at mirantis.com> wrote:
> >>> > On Mon, Apr 28, 2014 at 6:01 PM, Kyle Mestery
> >>> > <mestery at noironetworks.com>
> >>> > wrote:
> >>> >>
> >>> >> On Mon, Apr 28, 2014 at 8:54 AM, Oleg Bondarev
> >>> >> <obondarev at mirantis.com>
> >>> >> wrote:
> >>> >> > Yeah, I also saw in docs that update-device is supported since
> >>> >> > 0.8.0 version, not sure why it didn't work in my setup.
> >>> >> > I installed latest libvirt 1.2.3 and now update-device works just
> >>> >> > fine
> >>> >> > and I
> >>> >> > am able
> >>> >> > to move instance tap device from one bridge to another with no
> >>> >> > downtime
> >>> >> > and
> >>> >> > no reboot!
> >>> >> > I'll try to investigate why it didn't work on 0.9.8 and which is
> >>> >> > the minimal libvirt version for this.
> >>> >> >
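For anyone reproducing this: the update-device flow being discussed boils down
to feeding virsh an updated <interface> definition in which only the source
bridge differs. The MAC below is the one from the capture; the bridge and tap
names are made up for illustration:

```xml
<!-- new-iface.xml: same MAC and tap device, only the source bridge changes -->
<interface type='bridge'>
  <mac address='fa:16:3e:ec:eb:a4'/>
  <source bridge='br-new'/>
  <target dev='tap0'/>
  <model type='virtio'/>
</interface>
```

Applied with something like `virsh update-device <domain> new-iface.xml --live`,
the guest keeps its tap device and MAC, which is presumably why no reboot or
hotplug event is seen inside the VM.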
> >>> >> Wow, cool! This is really good news. Thanks for driving this! By
> >>> >> chance did you notice if there was a drop in connectivity at all, or
> >>> >> if the guest detected the move at all?
> >>> >
> >>> >
> >>> > Didn't check it yet. What in your opinion would be the best way of
> >>> > testing
> >>> > this?
> >>> >
> >>> The simplest way would be to have a ping running when you run
> >>> "update-device" and see if any packets are dropped. We can do more
> >>> thorough testing after that, but that would give us a good
> >>> approximation of connectivity while swapping the underlying device.
> >>>
> >>> >> Kyle
> >>> >>
> >>> >> > Thanks,
> >>> >> > Oleg
> >>> >> >
> >>> >> >
> >>> >> > On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery
> >>> >> > <mestery at noironetworks.com>
> >>> >> > wrote:
> >>> >> >>
> >>> >> >> According to this page [1], "update-device" is supported from
> >>> >> >> libvirt
> >>> >> >> 0.8.0 onwards. So in theory, this should be working with your
> 0.9.8
> >>> >> >> version you have. If you continue to hit issues here Oleg, I'd
> >>> >> >> suggest
> >>> >> >> sending an email to the libvirt mailing list with the specifics
> of
> >>> >> >> the
> >>> >> >> problem. I've found in the past there are lots of very helpful on
> >>> >> >> that
> >>> >> >> mailing list.
> >>> >> >>
> >>> >> >> Thanks,
> >>> >> >> Kyle
> >>> >> >>
> >>> >> >> [1]
> >>> >> >>
> >>> >> >> http://libvirt.org/sources/virshcmdref/html-single/#sect-update-device
> >>> >> >>
> >>> >> >> On Thu, Apr 24, 2014 at 7:42 AM, Oleg Bondarev
> >>> >> >> <obondarev at mirantis.com>
> >>> >> >> wrote:
> >>> >> >> > So here is the etherpad for the migration discussion:
> >>> >> >> > https://etherpad.openstack.org/p/novanet-neutron-migration
> >>> >> >> > I've also filed a design session on this:
> >>> >> >> > http://summit.openstack.org/cfp/details/374
> >>> >> >> >
> >>> >> >> > Currently I'm still struggling with instance vNic update,
> >>> >> >> > trying to move it from one bridge to another.
> >>> >> >> > Tried the following on ubuntu 12.04 with libvirt 0.9.8:
> >>> >> >> >
> >>> >> >> >
> >>> >> >> > https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-dynamic-vNIC.html
> >>> >> >> > virsh update-device shows success but nothing actually changes
> >>> >> >> > in the instance interface config.
> >>> >> >> > Going to try this with later libvirt version.
> >>> >> >> >
> >>> >> >> > Thanks,
> >>> >> >> > Oleg
> >>> >> >> >
> >>> >> >> >
> >>> >> >> >
> >>> >> >> > On Wed, Apr 23, 2014 at 3:24 PM, Rossella Sblendido
> >>> >> >> > <rsblendido at suse.com>
> >>> >> >> > wrote:
> >>> >> >> >>
> >>> >> >> >>
> >>> >> >> >> Very interesting topic!
> >>> >> >> >> +1 Salvatore
> >>> >> >> >>
> >>> >> >> >> It would be nice to have an etherpad to share the information
> >>> >> >> >> and organize a plan. This way it would be easier for interested
> >>> >> >> >> people to join.
> >>> >> >> >>
> >>> >> >> >> Rossella
> >>> >> >> >>
> >>> >> >> >>
> >>> >> >> >> On 04/23/2014 12:57 AM, Salvatore Orlando wrote:
> >>> >> >> >>
> >>> >> >> >> It's great to see that there is activity on the launchpad
> >>> >> >> >> blueprint
> >>> >> >> >> as
> >>> >> >> >> well.
> >>> >> >> >> From what I heard Oleg should have already translated the
> >>> >> >> >> various discussions into a list of functional requirements
> >>> >> >> >> (or something like that).
> >>> >> >> >>
> >>> >> >> >> If that is correct, it might be a good idea to share them with
> >>> >> >> >> relevant stakeholders (operators and developers), define an
> >>> >> >> >> actionable plan for Juno, and then distribute tasks.
> >>> >> >> >> It would be a shame if it turns out several contributors are
> >>> >> >> >> working on this topic independently.
> >>> >> >> >>
> >>> >> >> >> Salvatore
> >>> >> >> >>
> >>> >> >> >>
> >>> >> >> >> On 22 April 2014 16:27, Jesse Pretorius
> >>> >> >> >> <jesse.pretorius at gmail.com>
> >>> >> >> >> wrote:
> >>> >> >> >>>
> >>> >> >> >>> On 22 April 2014 14:58, Salvatore Orlando
> >>> >> >> >>> <sorlando at nicira.com> wrote:
> >>> >> >> >>>>
> >>> >> >> >>>> From previous requirements discussions,
> >>> >> >> >>>
> >>> >> >> >>>
> >>> >> >> >>> There's a track record of discussions on the whiteboard here:
> >>> >> >> >>>
> >>> >> >> >>>
> >>> >> >> >>> https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade
> >>> >> >> >>>
> >>> >> >> >>> _______________________________________________
> >>> >> >> >>> OpenStack-dev mailing list
> >>> >> >> >>> OpenStack-dev at lists.openstack.org
> >>> >> >> >>>
> >>> >> >> >>>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >> >> >>>