[openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

Eugene Nikanorov enikanorov at mirantis.com
Wed Apr 30 13:39:16 UTC 2014


I think it's better to test with some tcp connection (ssh session?) rather
than with ping.
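
Something like the following, as a rough sketch (the guest IP and user
are just for illustration):

  # keep an ssh session busy; the timestamps show exactly where traffic
  # stalls, and a connection reset would tell us whether an established
  # TCP session survives the move at all
  ssh user@10.0.0.4 'while true; do date; sleep 1; done'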

Eugene.


On Wed, Apr 30, 2014 at 5:28 PM, Oleg Bondarev <obondarev at mirantis.com> wrote:

> So by running ping during the instance interface update we can see
> ~10-20 sec of connectivity downtime. Here is a tcpdump capture taken
> during the update (pinging the ext net gateway):
>
> 05:58:41.020791 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 10, length 64
> 05:58:41.020866 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 10, length 64
> 05:58:41.885381 STP 802.1s, Rapid STP, CIST Flags [Learn, Forward, Agreement]
> 05:58:42.022785 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 11, length 64
> 05:58:42.022832 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 11, length 64
> [vm interface updated..]
> 05:58:43.023310 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 12, length 64
> 05:58:44.024042 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 13, length 64
> 05:58:45.025760 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 14, length 64
> 05:58:46.026260 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 15, length 64
> 05:58:47.027813 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 16, length 64
> 05:58:48.028229 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 17, length 64
> 05:58:49.029881 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 18, length 64
> 05:58:50.029952 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 19, length 64
> 05:58:51.031380 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 20, length 64
> 05:58:52.032012 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 21, length 64
> 05:58:53.033456 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 22, length 64
> 05:58:54.034061 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 23, length 64
> 05:58:55.035170 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 24, length 64
> 05:58:56.035988 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 25, length 64
> 05:58:57.037285 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 26, length 64
> 05:58:57.045691 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
> 05:58:58.038245 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 27, length 64
> 05:58:58.045496 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
> 05:58:59.040143 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 28, length 64
> 05:58:59.045609 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
> 05:59:00.040789 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 29, length 64
> 05:59:01.042333 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
> 05:59:01.042618 ARP, Reply 10.0.0.1 is-at fa:16:3e:61:28:fa (oui Unknown), length 28
> 05:59:01.043471 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 30, length 64
> 05:59:01.063176 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 30, length 64
> 05:59:02.042699 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 31, length 64
> 05:59:02.042840 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 31, length 64
>
> However, this connectivity downtime can be significantly reduced by
> restarting the network service on the instance right after the
> interface update.
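>
> As a rough illustration, what I run inside the guest right after the
> update (exact commands depend on the guest image; this is just a
> Debian/Ubuntu-style sketch):
>
>   # restart networking so the guest re-runs DHCP and re-announces
>   # itself, instead of waiting for its ARP retries to succeed
>   sudo service networking restart
>   # or just bounce the single interface:
>   sudo ifdown eth0 && sudo ifup eth0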
>
>
> On Mon, Apr 28, 2014 at 6:29 PM, Kyle Mestery <mestery at noironetworks.com> wrote:
>
>> On Mon, Apr 28, 2014 at 9:19 AM, Oleg Bondarev <obondarev at mirantis.com>
>> wrote:
>> > On Mon, Apr 28, 2014 at 6:01 PM, Kyle Mestery
>> > <mestery at noironetworks.com> wrote:
>> >>
>> >> On Mon, Apr 28, 2014 at 8:54 AM, Oleg Bondarev
>> >> <obondarev at mirantis.com> wrote:
>> >> > Yeah, I also saw in the docs that update-device is supported since
>> >> > version 0.8.0; not sure why it didn't work in my setup.
>> >> > I installed the latest libvirt (1.2.3) and now update-device works
>> >> > just fine: I am able to move the instance tap device from one
>> >> > bridge to another with no downtime and no reboot!
>> >> > I'll try to investigate why it didn't work on 0.9.8 and what the
>> >> > minimal libvirt version for this is.
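>> >> >
>> >> > For the record, the procedure is roughly the following (the domain
>> >> > name, MAC and bridge name below are just from my test setup):
>> >> >
>> >> >   # new-iface.xml: same MAC as the existing vNIC (that's how
>> >> >   # libvirt matches the device), only the source bridge changes
>> >> >   <interface type='bridge'>
>> >> >     <mac address='fa:16:3e:aa:bb:cc'/>
>> >> >     <source bridge='br-new'/>
>> >> >     <model type='virtio'/>
>> >> >   </interface>
>> >> >
>> >> >   # apply it to the running domain
>> >> >   virsh update-device instance-00000001 new-iface.xml --live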
>> >> >
>> >> Wow, cool! This is really good news. Thanks for driving this! By
>> >> chance did you notice if there was a drop in connectivity, or
>> >> whether the guest detected the move at all?
>> >
>> >
>> > Didn't check it yet. What in your opinion would be the best way of
>> > testing this?
>> >
>> The simplest way would be to have a ping running when you run
>> "update-device" and see if any packets are dropped. We can do more
>> thorough testing after that, but that would give us a good
>> approximation of connectivity while swapping the underlying device.
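>>
>> As a sketch (the domain and file names are just examples):
>>
>>   # terminal 1: ping the instance from the network node
>>   ping 10.0.0.4
>>
>>   # terminal 2: on the compute node, swap the vNIC to the new bridge
>>   virsh update-device instance-00000001 new-iface.xml --live
>>
>> Any gap in the icmp_seq numbers is the downtime we're looking for.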
>>
>> >> Kyle
>> >>
>> >> > Thanks,
>> >> > Oleg
>> >> >
>> >> >
>> >> > On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery
>> >> > <mestery at noironetworks.com>
>> >> > wrote:
>> >> >>
>> >> >> According to this page [1], "update-device" is supported from
>> >> >> libvirt 0.8.0 onwards. So in theory, this should be working with
>> >> >> the 0.9.8 version you have. If you continue to hit issues here
>> >> >> Oleg, I'd suggest sending an email to the libvirt mailing list
>> >> >> with the specifics of the problem. I've found in the past there
>> >> >> are lots of very helpful people on that mailing list.
>> >> >>
>> >> >> Thanks,
>> >> >> Kyle
>> >> >>
>> >> >> [1]
>> >> >> http://libvirt.org/sources/virshcmdref/html-single/#sect-update-device
>> >> >>
>> >> >> On Thu, Apr 24, 2014 at 7:42 AM, Oleg Bondarev
>> >> >> <obondarev at mirantis.com> wrote:
>> >> >> > So here is the etherpad for the migration discussion:
>> >> >> > https://etherpad.openstack.org/p/novanet-neutron-migration
>> >> >> > I've also filed a design session on this:
>> >> >> > http://summit.openstack.org/cfp/details/374
>> >> >> >
>> >> >> > Currently I'm still struggling with the instance vNIC update,
>> >> >> > trying to move it from one bridge to another.
>> >> >> > Tried the following on Ubuntu 12.04 with libvirt 0.9.8:
>> >> >> > https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-dynamic-vNIC.html
>> >> >> > virsh update-device reports success but nothing actually changes
>> >> >> > in the instance interface config.
>> >> >> > Going to try this with a later libvirt version.
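>> >> >> >
>> >> >> > (For reference, this is how I check which versions are in play:
>> >> >> >
>> >> >> >   virsh version
>> >> >> >
>> >> >> > which prints both the libvirt library version and the running
>> >> >> > hypervisor version.)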
>> >> >> >
>> >> >> > Thanks,
>> >> >> > Oleg
>> >> >> >
>> >> >> >
>> >> >> >
>> >> >> > On Wed, Apr 23, 2014 at 3:24 PM, Rossella Sblendido
>> >> >> > <rsblendido at suse.com>
>> >> >> > wrote:
>> >> >> >>
>> >> >> >>
>> >> >> >> Very interesting topic!
>> >> >> >> +1 Salvatore
>> >> >> >>
>> >> >> >> It would be nice to have an etherpad to share the information
>> >> >> >> and organize a plan. This way it would be easier for interested
>> >> >> >> people to join.
>> >> >> >>
>> >> >> >> Rossella
>> >> >> >>
>> >> >> >>
>> >> >> >> On 04/23/2014 12:57 AM, Salvatore Orlando wrote:
>> >> >> >>
>> >> >> >> It's great to see that there is activity on the launchpad
>> >> >> >> blueprint as well.
>> >> >> >> From what I heard, Oleg should have already translated the
>> >> >> >> various discussions into a list of functional requirements (or
>> >> >> >> something like that).
>> >> >> >>
>> >> >> >> If that is correct, it might be a good idea to share them with
>> >> >> >> relevant stakeholders (operators and developers), define an
>> >> >> >> actionable plan for Juno, and then distribute tasks.
>> >> >> >> It would be a shame if it turns out several contributors are
>> >> >> >> working on this topic independently.
>> >> >> >>
>> >> >> >> Salvatore
>> >> >> >>
>> >> >> >>
>> >> >> >> On 22 April 2014 16:27, Jesse Pretorius
>> >> >> >> <jesse.pretorius at gmail.com> wrote:
>> >> >> >>>
>> >> >> >>> On 22 April 2014 14:58, Salvatore Orlando <sorlando at nicira.com>
>> >> >> >>> wrote:
>> >> >> >>>>
>> >> >> >>>> From previous requirements discussions,
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> There's a track record of discussions on the whiteboard here:
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade