<div dir="ltr"><div id="gt-res-content" class=""><div dir="ltr" style="zoom:1"><span id="result_box" class="" lang="en"><span class="">Can anyone</span> <span class="">explain the following</span> <span class="">OpenStack</span> <span class="">in</span> <span class="">domain, group, user</span> <span class="">is the relationship</span> <span class="">between</span> <span class="">what。</span></span></div>
On Tue, Sep 3, 2013 at 4:18 PM, <<a href="mailto:openstack-operators-request@lists.openstack.org" target="_blank">openstack-operators-request@lists.openstack.org</a>> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Send OpenStack-operators mailing list submissions to<br>
<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
<br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:openstack-operators-request@lists.openstack.org">openstack-operators-request@lists.openstack.org</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:openstack-operators-owner@lists.openstack.org">openstack-operators-owner@lists.openstack.org</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of OpenStack-operators digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. Re: OpenStack-operators Digest, Vol 35, Issue 1 (yungho)<br>
2. Re: OpenStack-operators Digest, Vol 35, Issue 1 (Lorin Hochstein)<br>
3. Re: Running DHCP agent in HA (grizzly) (Simon Pasquier)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Tue, 3 Sep 2013 10:51:57 +0800<br>
From: yungho <<a href="mailto:yungho5054@gmail.com">yungho5054@gmail.com</a>><br>
To: <a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a><br>
Subject: Re: [Openstack-operators] OpenStack-operators Digest, Vol 35,<br>
Issue 1<br>
Message-ID:<br>
<<a href="mailto:CAHeoptThFLv79dkKHnRdV3bizxEHn5YqW7U6DxPUaND%2Bzvjsog@mail.gmail.com">CAHeoptThFLv79dkKHnRdV3bizxEHn5YqW7U6DxPUaND+zvjsog@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="iso-8859-1"<br>
<br>
Hello,<br>
I have deployed the OpenStack Grizzly release in a CentOS 6.4 environment, with a Controller<br>
Node, a Network Node, and a Compute Node. Networking uses Quantum, but I still do not<br>
understand the Quantum network models, and I would like to know under what circumstances<br>
to use GRE and under what circumstances to use VLAN mode.<br>
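<br>
(For comparison, a minimal sketch of the two modes in /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini;<br>
the interface names, ranges and IP below are placeholders. VLAN mode needs the physical switches to trunk the<br>
configured VLAN range to every compute and network node, while GRE mode only needs IP connectivity between<br>
the nodes' data interfaces, so it is usually chosen when the switch configuration cannot be changed.)<br>
<br>
# VLAN mode<br>
[OVS]<br>
tenant_network_type = vlan<br>
network_vlan_ranges = physnet1:1000:2999<br>
bridge_mappings = physnet1:br-eth1<br>
<br>
# GRE mode<br>
[OVS]<br>
tenant_network_type = gre<br>
enable_tunneling = True<br>
tunnel_id_ranges = 1:1000<br>
local_ip = 192.168.0.10<br>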
<br>
<br>
On Tue, Sep 3, 2013 at 8:44 AM, <<br>
<a href="mailto:openstack-operators-request@lists.openstack.org">openstack-operators-request@lists.openstack.org</a>> wrote:<br>
<br>
> Send OpenStack-operators mailing list submissions to<br>
> <a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a><br>
><br>
> To subscribe or unsubscribe via the World Wide Web, visit<br>
><br>
> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
><br>
> or, via email, send a message with subject or body 'help' to<br>
> <a href="mailto:openstack-operators-request@lists.openstack.org">openstack-operators-request@lists.openstack.org</a><br>
><br>
> You can reach the person managing the list at<br>
> <a href="mailto:openstack-operators-owner@lists.openstack.org">openstack-operators-owner@lists.openstack.org</a><br>
><br>
> When replying, please edit your Subject line so it is more specific<br>
> than "Re: Contents of OpenStack-operators digest..."<br>
><br>
><br>
> Today's Topics:<br>
><br>
> 1. Quantum Security Groups not working - iptables rules are not<br>
> Evaluated (Sebastian Porombka)<br>
> 2. Re: Quantum Security Groups not working - iptables rules are<br>
> not Evaluated (Darragh O'Reilly)<br>
> 3. Running DHCP agent in HA (grizzly) (Robert van Leeuwen)<br>
> 4. Re: Quantum Security Groups not working - iptables rules are<br>
> not Evaluated (Lorin Hochstein)<br>
> 5. Re: Running DHCP agent in HA (grizzly) (Simon Pasquier)<br>
> 6. Re: Quantum Security Groups not working - iptables rules are<br>
> not Evaluated (Darragh O'Reilly)<br>
> 7. Migrating instances in grizzly (Juan Jos? Pavlik Salles)<br>
> 8. Re: Quantum Security Groups not working - iptables rules are<br>
> not Evaluated (Sebastian Porombka)<br>
> 9. Re: Migrating instances in grizzly (Juan Jos? Pavlik Salles)<br>
><br>
><br>
> ----------------------------------------------------------------------<br>
><br>
> Message: 1<br>
> Date: Mon, 2 Sep 2013 13:48:08 +0000<br>
> From: Sebastian Porombka <<a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a>><br>
> To: "<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>"<br>
> <<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> Subject: [Openstack-operators] Quantum Security Groups not working -<br>
> iptables rules are not Evaluated<br>
> Message-ID: <<a href="mailto:CE4A6242.49182%25porombka@uni-paderborn.de">CE4A6242.49182%porombka@uni-paderborn.de</a>><br>
> Content-Type: text/plain; charset="iso-8859-1"<br>
><br>
> Hi folks.<br>
><br>
> We're currently deploying an OpenStack (Grizzly) cloud environment<br>
> and are running into problems implementing security groups as described<br>
> in [1].<br>
><br>
> The (hopefully) relevant configuration settings are:<br>
><br>
> /etc/nova/nova.conf<br>
> [...]<br>
> security_group_api=quantum<br>
> network_api_class=nova.network.quantumv2.api.API<br>
> libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver<br>
> firewall_driver=nova.virt.firewall.NoopFirewallDriver<br>
> [...]<br>
><br>
> /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini<br>
> [...]<br>
> firewall_driver =<br>
> quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver<br>
> [...]<br>
><br>
> The Networks for the vm's are attached to the compute-nodes via VLAN<br>
> encapsulation and correctly mapped to the vm's.<br>
><br>
> From our point of view, we understand the need for the<br>
> "ovs-bridge <> veth glue <> linux-bridge (for filtering) <> vm" construction<br>
> and have observed the individual components in our deployment. See [2].<br>
><br>
> Everything is working except the security groups.<br>
> We observed that iptables rules are generated for the quantum-openvswi-*<br>
> chains.<br>
> And the traffic arriving untagged (native vlan for management) on the<br>
> machine is processed by iptables but not<br>
> the traffic which arrived encapsulated.<br>
><br>
> The traffic which is unpacked by openvswitch and is bridged via the veth<br>
> and<br>
> the tap into<br>
> the machine isn't processed by the iptables rules.<br>
><br>
> We have no remaining clue/idea how to solve this issue? :(<br>
><br>
> Greetings<br>
> Sebastian<br>
><br>
> [1]<br>
><br>
> <a href="http://docs.openstack.org/trunk/openstack-network/admin/content/under_the_ho" target="_blank">http://docs.openstack.org/trunk/openstack-network/admin/content/under_the_ho</a><br>
> od_openvswitch.html<br>
> [2] <a href="http://pastebin.com/WXMH6y4A" target="_blank">http://pastebin.com/WXMH6y4A</a><br>
><br>
> --<br>
> Sebastian Porombka, M.Sc.<br>
> Zentrum für Informations- und Medientechnologien (IMT)<br>
> Universität Paderborn<br>
><br>
> E-Mail: <a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a><br>
> Tel.: 05251/60-5999<br>
> Fax: 05251/60-48-5999<br>
> Raum: N5.314<br>
><br>
> --------------------------------------------<br>
> Q: Why is this email five sentences or less?<br>
> A: <a href="http://five.sentenc.es" target="_blank">http://five.sentenc.es</a> <<a href="http://five.sentenc.es/" target="_blank">http://five.sentenc.es/</a>><br>
><br>
> Please consider the environment before printing this email.<br>
><br>
><br>
> -------------- next part --------------<br>
> An HTML attachment was scrubbed...<br>
> URL: <<br>
> <a href="http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/83f01473/attachment-0001.html" target="_blank">http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/83f01473/attachment-0001.html</a><br>
> ><br>
> -------------- next part --------------<br>
> A non-text attachment was scrubbed...<br>
> Name: smime.p7s<br>
> Type: application/pkcs7-signature<br>
> Size: 5443 bytes<br>
> Desc: not available<br>
> URL: <<br>
> <a href="http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/83f01473/attachment-0001.bin" target="_blank">http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/83f01473/attachment-0001.bin</a><br>
> ><br>
><br>
> ------------------------------<br>
><br>
> Message: 2<br>
> Date: Mon, 2 Sep 2013 15:21:10 +0100 (BST)<br>
> From: Darragh O'Reilly <<a href="mailto:dara2002-openstack@yahoo.com">dara2002-openstack@yahoo.com</a>><br>
> To: Sebastian Porombka <<a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a>>,<br>
> "<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>"<br>
> <<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> Subject: Re: [Openstack-operators] Quantum Security Groups not working<br>
> - iptables rules are not Evaluated<br>
> Message-ID:<br>
> <<a href="mailto:1378131670.96925.YahooMailNeo@web172405.mail.ir2.yahoo.com">1378131670.96925.YahooMailNeo@web172405.mail.ir2.yahoo.com</a>><br>
> Content-Type: text/plain; charset=utf-8<br>
><br>
><br>
> it is not working because you are using the ovs bridge compatibility<br>
> module.<br>
><br>
> Re,<br>
> Darragh.<br>
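> <br>
> (A sketch of how to confirm that on Ubuntu -- package, module and command names are the usual ones, adjust<br>
> for your distro: the compatibility layer makes the qbr* "Linux" bridges show up as OVS bridges.)<br>
> <br>
> dpkg -l | grep openvswitch-brcompat   # package installed?<br>
> lsmod | grep brcompat                 # kernel module loaded?<br>
> ovs-vsctl show | grep qbr             # qbr* listed here means brcompat created them as OVS bridges<br>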
><br>
> >________________________________<br>
> > From: Sebastian Porombka <<a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a>><br>
> >To: "<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>" <<br>
> <a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> >Sent: Monday, 2 September 2013, 14:48<br>
> >Subject: [Openstack-operators] Quantum Security Groups not working -<br>
> iptables rules are not Evaluated<br>
> ><br>
> ><br>
> ><br>
> >Hi folks.<br>
> ><br>
> ><br>
> >We're currently on the way to deploy an openstack (grizzly) cloud<br>
> environment?<br>
> >and suffering in problems implementing the security groups like described<br>
> in [1].<br>
> ><br>
> ><br>
> >The (hopefully) relevant configuration settings are:<br>
> ><br>
> ><br>
> >/etc/nova/nova.conf<br>
> >[?]<br>
> >security_group_api=quantum<br>
> >network_api_class=nova.network.quantumv2.api.API<br>
> >libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver<br>
> >firewall_driver=nova.virt.firewall.NoopFirewallDriver<br>
> >[?]<br>
> ><br>
> ><br>
> >/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini<br>
> >[?]<br>
> >firewall_driver =<br>
> quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver<br>
> >[?]<br>
> ><br>
> ><br>
> >The Networks for the vm's are attached to the compute-nodes via VLAN?<br>
> >encapsulation and correctly mapped to the vm's.<br>
> ><br>
> ><br>
> >From our point of view - we're understanding the need of the?<br>
> >"ovs-bridge <> veth glue <> linux-bridge (for filtering) <><br>
> vm"-construction?<br>
> >and observed the single components in our deployment. See [2]<br>
> ><br>
> ><br>
> >Everything is working except the security groups.?<br>
> >We observed that ip-tables rules are generated for the?quantum-openvswi-*<br>
> chains of iptables.?<br>
> >And the traffic arriving untagged (native vlan for management) on the<br>
> machine is processed by iptables but not?<br>
> >the traffic which arrived encapsulated.<br>
> ><br>
> ><br>
> >The traffic which is unpacked by openvswitch and is bridged via the veth<br>
> and the tap into?<br>
> >the machine isn't processed by the iptables rules.<br>
> ><br>
> ><br>
> >We have no remaining clue/idea how to solve this issue? :(<br>
> ><br>
> ><br>
> >Greetings<br>
> >? ?Sebastian<br>
> ><br>
> ><br>
> >[1]?<br>
> <a href="http://docs.openstack.org/trunk/openstack-network/admin/content/under_the_hood_openvswitch.html" target="_blank">http://docs.openstack.org/trunk/openstack-network/admin/content/under_the_hood_openvswitch.html</a><br>
> >[2]?<a href="http://pastebin.com/WXMH6y4A" target="_blank">http://pastebin.com/WXMH6y4A</a><br>
> ><br>
> ><br>
> >--<br>
> >Sebastian Porombka,?M.Sc.?<br>
> >Zentrum f?r Informations- und Medientechnologien (IMT)<br>
> >Universit?t Paderborn<br>
> ><br>
> ><br>
> >E-Mail:?<a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a><br>
> >Tel.: 05251/60-5999<br>
> >Fax:?05251/60-48-5999<br>
> >Raum: N5.314?<br>
> ><br>
> ><br>
> >--------------------------------------------<br>
> >Q: Why is this email five sentences or less?<br>
> >A:?<a href="http://five.sentenc.es" target="_blank">http://five.sentenc.es</a><br>
> ><br>
> ><br>
> >Please consider the environment before printing this email.<br>
> >_______________________________________________<br>
> >OpenStack-operators mailing list<br>
> ><a href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a><br>
> ><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
> ><br>
> ><br>
> ><br>
><br>
><br>
><br>
> ------------------------------<br>
><br>
> Message: 3<br>
> Date: Mon, 2 Sep 2013 14:41:34 +0000<br>
> From: Robert van Leeuwen <<a href="mailto:Robert.vanLeeuwen@spilgames.com">Robert.vanLeeuwen@spilgames.com</a>><br>
> To: "<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>"<br>
> <<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> Subject: [Openstack-operators] Running DHCP agent in HA (grizzly)<br>
> Message-ID:<br>
> <79E00D9220302D448C1D59B5224ED12D8D9C7AE1@EchoDB02.spil.local><br>
> Content-Type: text/plain; charset="us-ascii"<br>
><br>
> Hi,<br>
><br>
> How would one make sure multiple dhcp agents run for the same segment?<br>
> I currently have 2 dhcp agents running but only one is added to a network<br>
> when the network is created.<br>
><br>
> I can manually add the other one with "quantum dhcp-agent-network-add",<br>
> but I would like this to happen automatically.<br>
><br>
> Thx,<br>
> Robert<br>
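> <br>
> (Until that is automatic, a possible workaround sketch -- the IDs are placeholders: list the DHCP agents,<br>
> see which ones already host the network, and add the missing one.)<br>
> <br>
> quantum agent-list | grep "DHCP agent"<br>
> quantum dhcp-agent-list-hosting-net NETWORK_ID<br>
> quantum dhcp-agent-network-add DHCP_AGENT_ID NETWORK_ID<br>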
><br>
><br>
><br>
><br>
> ------------------------------<br>
><br>
> Message: 4<br>
> Date: Mon, 2 Sep 2013 11:00:41 -0400<br>
> From: Lorin Hochstein <<a href="mailto:lorin@nimbisservices.com">lorin@nimbisservices.com</a>><br>
> To: "Darragh O'Reilly" <<a href="mailto:dara2002-openstack@yahoo.com">dara2002-openstack@yahoo.com</a>><br>
> Cc: "<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>"<br>
> <<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> Subject: Re: [Openstack-operators] Quantum Security Groups not working<br>
> - iptables rules are not Evaluated<br>
> Message-ID:<br>
> <<br>
> <a href="mailto:CADzpNMXaPHQCdgjCwOJDK-fC4E6KF14K_gvr5AHnwknEwH4N8w@mail.gmail.com">CADzpNMXaPHQCdgjCwOJDK-fC4E6KF14K_gvr5AHnwknEwH4N8w@mail.gmail.com</a>><br>
> Content-Type: text/plain; charset="windows-1252"<br>
><br>
> Darragh:<br>
><br>
> Can you elaborate on this a little more? Do you mean that the "brcompat"<br>
> kernel module has been loaded, and this breaks security groups with the ovs<br>
> plugin? Should we add something in the documentation about this?<br>
><br>
> Lorin<br>
><br>
><br>
> Do you mean that the problem is that the ovs-brcompatd service is running?<br>
><br>
> Or that the openvswitch-brcompat package is installed?<br>
><br>
><br>
> On Mon, Sep 2, 2013 at 10:21 AM, Darragh O'Reilly <<br>
> <a href="mailto:dara2002-openstack@yahoo.com">dara2002-openstack@yahoo.com</a>> wrote:<br>
><br>
> ><br>
> > it is not working because you are using the ovs bridge compatibility<br>
> > module.<br>
> ><br>
> > Re,<br>
> > Darragh.<br>
> ><br>
> > >________________________________<br>
> > > From: Sebastian Porombka <<a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a>><br>
> > >To: "<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>" <<br>
> > <a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> > >Sent: Monday, 2 September 2013, 14:48<br>
> > >Subject: [Openstack-operators] Quantum Security Groups not working -<br>
> > iptables rules are not Evaluated<br>
> > ><br>
> > ><br>
> > ><br>
> > >Hi folks.<br>
> > ><br>
> > ><br>
> > >We're currently on the way to deploy an openstack (grizzly) cloud<br>
> > environment<br>
> > >and suffering in problems implementing the security groups like<br>
> described<br>
> > in [1].<br>
> > ><br>
> > ><br>
> > >The (hopefully) relevant configuration settings are:<br>
> > ><br>
> > ><br>
> > >/etc/nova/nova.conf<br>
> > >[?]<br>
> > >security_group_api=quantum<br>
> > >network_api_class=nova.network.quantumv2.api.API<br>
> > >libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver<br>
> > >firewall_driver=nova.virt.firewall.NoopFirewallDriver<br>
> > >[?]<br>
> > ><br>
> > ><br>
> > >/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini<br>
> > >[?]<br>
> > >firewall_driver =<br>
> > quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver<br>
> > >[?]<br>
> > ><br>
> > ><br>
> > >The Networks for the vm's are attached to the compute-nodes via VLAN<br>
> > >encapsulation and correctly mapped to the vm's.<br>
> > ><br>
> > ><br>
> > >From our point of view - we're understanding the need of the<br>
> > >"ovs-bridge <> veth glue <> linux-bridge (for filtering) <><br>
> > vm"-construction<br>
> > >and observed the single components in our deployment. See [2]<br>
> > ><br>
> > ><br>
> > >Everything is working except the security groups.<br>
> > >We observed that ip-tables rules are generated for the<br>
> quantum-openvswi-*<br>
> > chains of iptables.<br>
> > >And the traffic arriving untagged (native vlan for management) on the<br>
> > machine is processed by iptables but not<br>
> > >the traffic which arrived encapsulated.<br>
> > ><br>
> > ><br>
> > >The traffic which is unpacked by openvswitch and is bridged via the veth<br>
> > and the tap into<br>
> > >the machine isn't processed by the iptables rules.<br>
> > ><br>
> > ><br>
> > >We have no remaining clue/idea how to solve this issue? :(<br>
> > ><br>
> > ><br>
> > >Greetings<br>
> > > Sebastian<br>
> > ><br>
> > ><br>
> > >[1]<br>
> ><br>
> <a href="http://docs.openstack.org/trunk/openstack-network/admin/content/under_the_hood_openvswitch.html" target="_blank">http://docs.openstack.org/trunk/openstack-network/admin/content/under_the_hood_openvswitch.html</a><br>
> > >[2] <a href="http://pastebin.com/WXMH6y4A" target="_blank">http://pastebin.com/WXMH6y4A</a><br>
> > ><br>
> > ><br>
> > >--<br>
> > >Sebastian Porombka, M.Sc.<br>
> > >Zentrum f?r Informations- und Medientechnologien (IMT)<br>
> > >Universit?t Paderborn<br>
> > ><br>
> > ><br>
> > >E-Mail: <a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a><br>
> > >Tel.: 05251/60-5999<br>
> > >Fax: 05251/60-48-5999<br>
> > >Raum: N5.314<br>
> > ><br>
> > ><br>
> > >--------------------------------------------<br>
> > >Q: Why is this email five sentences or less?<br>
> > >A: <a href="http://five.sentenc.es" target="_blank">http://five.sentenc.es</a><br>
> > ><br>
> > ><br>
> > >Please consider the environment before printing this email.<br>
> > >_______________________________________________<br>
> > >OpenStack-operators mailing list<br>
> > ><a href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a><br>
> > ><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
> > ><br>
> > ><br>
> > ><br>
> ><br>
> > _______________________________________________<br>
> > OpenStack-operators mailing list<br>
> > <a href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a><br>
> > <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
> ><br>
><br>
><br>
><br>
> --<br>
> Lorin Hochstein<br>
> Lead Architect - Cloud Services<br>
> Nimbis Services, Inc.<br>
> <a href="http://www.nimbisservices.com" target="_blank">www.nimbisservices.com</a><br>
> -------------- next part --------------<br>
> An HTML attachment was scrubbed...<br>
> URL: <<br>
> <a href="http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/60036073/attachment-0001.html" target="_blank">http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/60036073/attachment-0001.html</a><br>
> ><br>
><br>
> ------------------------------<br>
><br>
> Message: 5<br>
> Date: Mon, 2 Sep 2013 17:09:18 +0200<br>
> From: Simon Pasquier <<a href="mailto:simon.pasquier@bull.net">simon.pasquier@bull.net</a>><br>
> To: <<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> Subject: Re: [Openstack-operators] Running DHCP agent in HA (grizzly)<br>
> Message-ID: <<a href="mailto:5224AA1E.60201@bull.net">5224AA1E.60201@bull.net</a>><br>
> Content-Type: text/plain; charset="ISO-8859-1"; format=flowed<br>
><br>
> Hello,<br>
> I guess you are running Grizzly. Automatically scheduling multiple DHCP<br>
> agents per network requires Havana.<br>
> Simon<br>
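> <br>
> (For reference, in Havana this appears to be controlled by dhcp_agents_per_network in neutron.conf -- a<br>
> sketch, assuming the default of 1 is raised; on Grizzly the dhcp-agent-network-add workaround above is the<br>
> way to go.)<br>
> <br>
> # neutron.conf (Havana)<br>
> dhcp_agents_per_network = 2<br>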
><br>
> Le 02/09/2013 16:41, Robert van Leeuwen a écrit :<br>
> > Hi,<br>
> ><br>
> > How would one make sure multiple dhcp agents run for the same segment?<br>
> > I currently have 2 dhcp agents running but only one is added to a<br>
> network when the network is created.<br>
> ><br>
> > When I manually add the other one with "quantum dhcp-agent-network-add"<br>
> but I would like this to happen automatically.<br>
> ><br>
> > Thx,<br>
> > Robert<br>
> ><br>
> ><br>
> > _______________________________________________<br>
> > OpenStack-operators mailing list<br>
> > <a href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a><br>
> > <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
> ><br>
><br>
><br>
> --<br>
> Simon Pasquier<br>
> Software Engineer<br>
> Bull, Architect of an Open World<br>
> Phone: + 33 4 76 29 71 49<br>
> <a href="http://www.bull.com" target="_blank">http://www.bull.com</a><br>
><br>
><br>
><br>
> ------------------------------<br>
><br>
> Message: 6<br>
> Date: Mon, 2 Sep 2013 16:28:37 +0100 (BST)<br>
> From: Darragh O'Reilly <<a href="mailto:dara2002-openstack@yahoo.com">dara2002-openstack@yahoo.com</a>><br>
> To: Lorin Hochstein <<a href="mailto:lorin@nimbisservices.com">lorin@nimbisservices.com</a>><br>
> Cc: "<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>"<br>
> <<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> Subject: Re: [Openstack-operators] Quantum Security Groups not working<br>
> - iptables rules are not Evaluated<br>
> Message-ID:<br>
> <<a href="mailto:1378135717.92851.YahooMailNeo@web172406.mail.ir2.yahoo.com">1378135717.92851.YahooMailNeo@web172406.mail.ir2.yahoo.com</a>><br>
> Content-Type: text/plain; charset=utf-8<br>
><br>
> Hi Lorin,<br>
><br>
> Sure, sorry for the brevity. It seems brcompat is being used because<br>
> qbr0188455b-25 appears in the 'ovs-vsctl show' output - so it was created<br>
> as an OVS bridge, but it should have been created as a Linux bridge. I have<br>
> never used brcompat, but I believe it intercepts calls from brctl and<br>
> configures OVS bridges instead of Linux bridges. I'm not sure how to<br>
> uninstall/disable it - it's probably an Operating System package.<br>
><br>
> I don't think any Openstack doc says to install/enable it.<br>
><br>
> Re,<br>
> Darragh.<br>
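> <br>
> (On Ubuntu it is indeed a package; a rough removal sketch, assuming the usual Ubuntu service names --<br>
> afterwards the qbr* bridges should reappear as Linux bridges:)<br>
> <br>
> apt-get purge openvswitch-brcompat<br>
> service openvswitch-switch restart<br>
> service quantum-plugin-openvswitch-agent restart<br>
> brctl show    # qbr* should now be listed here instead of in 'ovs-vsctl show'<br>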
><br>
> >________________________________<br>
> > From: Lorin Hochstein <<a href="mailto:lorin@nimbisservices.com">lorin@nimbisservices.com</a>><br>
> >To: Darragh O'Reilly <<a href="mailto:dara2002-openstack@yahoo.com">dara2002-openstack@yahoo.com</a>><br>
> >Cc: Sebastian Porombka <<a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a>>; "<br>
> <a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>" <<br>
> <a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> >Sent: Monday, 2 September 2013, 16:00<br>
> >Subject: Re: [Openstack-operators] Quantum Security Groups not working -<br>
> iptables rules are not Evaluated<br>
> ><br>
> ><br>
> ><br>
> >Darragh:<br>
> ><br>
> ><br>
> >Can you elaborate on this a little more? Do you mean that the "brcompat"<br>
> kernel module has been loaded, and this breaks security groups with the ovs<br>
> plugin? Should we add something in the documentation about this??<br>
> ><br>
> ><br>
> >Lorin<br>
> ><br>
> ><br>
> ><br>
> ><br>
> >Do you mean that the problem is that the ovs-brcompatd service is<br>
> running??<br>
> ><br>
> ><br>
> >openvswitch-brcompat package is installed??<br>
> ><br>
> ><br>
> ><br>
> >On Mon, Sep 2, 2013 at 10:21 AM, Darragh O'Reilly <<br>
> <a href="mailto:dara2002-openstack@yahoo.com">dara2002-openstack@yahoo.com</a>> wrote:<br>
> ><br>
> ><br>
> >>it is not working because you are using the ovs bridge compatibility<br>
> module.<br>
> >><br>
> >>Re,<br>
> >>Darragh.<br>
> >><br>
> >>>________________________________<br>
> >>> From: Sebastian Porombka <<a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a>><br>
> >>>To: "<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>" <<br>
> <a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> >>>Sent: Monday, 2 September 2013, 14:48<br>
> >>>Subject: [Openstack-operators] Quantum Security Groups not working -<br>
> iptables rules are not Evaluated<br>
> >><br>
> >>><br>
> >>><br>
> >>><br>
> >>>Hi folks.<br>
> >>><br>
> >>><br>
> >>>We're currently on the way to deploy an openstack (grizzly) cloud<br>
> environment?<br>
> >>>and suffering in problems implementing the security groups like<br>
> described in [1].<br>
> >>><br>
> >>><br>
> >>>The (hopefully) relevant configuration settings are:<br>
> >>><br>
> >>><br>
> >>>/etc/nova/nova.conf<br>
> >>>[?]<br>
> >>>security_group_api=quantum<br>
> >>>network_api_class=nova.network.quantumv2.api.API<br>
> >>>libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver<br>
> >>>firewall_driver=nova.virt.firewall.NoopFirewallDriver<br>
> >>>[?]<br>
> >>><br>
> >>><br>
> >>>/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini<br>
> >>>[?]<br>
> >>>firewall_driver =<br>
> quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver<br>
> >>>[?]<br>
> >>><br>
> >>><br>
> >>>The Networks for the vm's are attached to the compute-nodes via VLAN?<br>
> >>>encapsulation and correctly mapped to the vm's.<br>
> >>><br>
> >>><br>
> >>>From our point of view - we're understanding the need of the?<br>
> >>>"ovs-bridge <> veth glue <> linux-bridge (for filtering) <><br>
> vm"-construction?<br>
> >>>and observed the single components in our deployment. See [2]<br>
> >>><br>
> >>><br>
> >>>Everything is working except the security groups.?<br>
> >>>We observed that ip-tables rules are generated for<br>
> the?quantum-openvswi-* chains of iptables.?<br>
> >>>And the traffic arriving untagged (native vlan for management) on the<br>
> machine is processed by iptables but not?<br>
> >>>the traffic which arrived encapsulated.<br>
> >>><br>
> >>><br>
> >>>The traffic which is unpacked by openvswitch and is bridged via the<br>
> veth and the tap into?<br>
> >>>the machine isn't processed by the iptables rules.<br>
> >>><br>
> >>><br>
> >>>We have no remaining clue/idea how to solve this issue? :(<br>
> >>><br>
> >>><br>
> >>>Greetings<br>
> >>>? ?Sebastian<br>
> >>><br>
> >>><br>
> >>>[1]?<br>
> <a href="http://docs.openstack.org/trunk/openstack-network/admin/content/under_the_hood_openvswitch.html" target="_blank">http://docs.openstack.org/trunk/openstack-network/admin/content/under_the_hood_openvswitch.html</a><br>
> >>>[2]?<a href="http://pastebin.com/WXMH6y4A" target="_blank">http://pastebin.com/WXMH6y4A</a><br>
> >>><br>
> >>><br>
> >>>--<br>
> >>>Sebastian Porombka,?M.Sc.?<br>
> >>>Zentrum f?r Informations- und Medientechnologien (IMT)<br>
> >>>Universit?t Paderborn<br>
> >>><br>
> >>><br>
> >>>E-Mail:?<a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a><br>
> >>>Tel.: 05251/60-5999<br>
> >>>Fax:?05251/60-48-5999<br>
> >>>Raum: N5.314?<br>
> >>><br>
> >>><br>
> >>>--------------------------------------------<br>
> >>>Q: Why is this email five sentences or less?<br>
> >>>A:?<a href="http://five.sentenc.es" target="_blank">http://five.sentenc.es</a><br>
> >>><br>
> >>><br>
> >>>Please consider the environment before printing this email.<br>
> >>>_______________________________________________<br>
> >>>OpenStack-operators mailing list<br>
> >>><a href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a><br>
> >>><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
> >>><br>
> >>><br>
> >>><br>
> >><br>
> >>_______________________________________________<br>
> >>OpenStack-operators mailing list<br>
> >><a href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a><br>
> >><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
> >><br>
> ><br>
> ><br>
> ><br>
> >--<br>
> ><br>
> >Lorin Hochstein<br>
> ><br>
> >Lead Architect - Cloud Services<br>
> >Nimbis Services, Inc.<br>
> ><a href="http://www.nimbisservices.com" target="_blank">www.nimbisservices.com</a><br>
> ><br>
> ><br>
><br>
><br>
><br>
> ------------------------------<br>
><br>
> Message: 7<br>
> Date: Mon, 2 Sep 2013 12:51:55 -0300<br>
> From: Juan José Pavlik Salles <<a href="mailto:jjpavlik@gmail.com">jjpavlik@gmail.com</a>><br>
> To: "<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>"<br>
> <<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> Subject: [Openstack-operators] Migrating instances in grizzly<br>
> Message-ID:<br>
> <<br>
> <a href="mailto:CAKCETkePaRLJz4U96QLhB4_qFgs9R_YUjdVtf4yq8Jbdepoquw@mail.gmail.com">CAKCETkePaRLJz4U96QLhB4_qFgs9R_YUjdVtf4yq8Jbdepoquw@mail.gmail.com</a>><br>
> Content-Type: text/plain; charset="iso-8859-1"<br>
><br>
> Hi guys, last Friday I started testing live-migration in my Grizzly cloud<br>
> with shared storage (GFS2), but I ran into a slightly weird problem:<br>
><br>
> This is the status before migrating:<br>
><br>
> -I have the p9 instance, also called instance-00000022, running on the "acelga"<br>
> compute node.<br>
><br>
> *root@acelga:~/tools# virsh list*<br>
> * Id Name State*<br>
> *----------------------------------------------------*<br>
> * 6 instance-00000022 running*<br>
> *<br>
> *<br>
> *root@acelga:~/tools# *<br>
> *<br>
> *<br>
> *<br>
> *<br>
> *root@cebolla:~/tool# virsh list*<br>
> * Id Nombre Estado*<br>
> *----------------------------------------------------*<br>
> *<br>
> *<br>
> *root@cebolla:~/tool# *<br>
><br>
> -Here you can see all the info about the instance<br>
><br>
> *root@cebolla:~/tool# nova --os-username=noc-admin --os-tenant-name=noc<br>
> --os-password=XXXXXXX --os-auth-url <a href="http://172.19.136.1:35357/v2.0" target="_blank">http://172.19.136.1:35357/v2.0</a> show<br>
> de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09*<br>
> *<br>
><br>
> +-------------------------------------+-----------------------------------------------------------+<br>
> *<br>
> *| Property | Value<br>
> |*<br>
> *<br>
><br>
> +-------------------------------------+-----------------------------------------------------------+<br>
> *<br>
> *| status | ACTIVE<br>
> |*<br>
> *| updated | 2013-09-02T15:27:39Z<br>
> |*<br>
> *| OS-EXT-STS:task_state | None<br>
> |*<br>
> *| OS-EXT-SRV-ATTR:host | acelga<br>
> |*<br>
> *| key_name | None<br>
> |*<br>
> *| image | Ubuntu 12.04.2 LTS<br>
> (1359ca8d-23a2-40e8-940f-d90b3e68bb39) |*<br>
> *| vlan1 network | 172.16.16.175<br>
> |*<br>
> *| hostId |<br>
> 81be94870821e17e327d92e9c80548ffcdd37d24054a235116669f53 |*<br>
> *| OS-EXT-STS:vm_state | active<br>
> |*<br>
> *| OS-EXT-SRV-ATTR:instance_name | instance-00000022<br>
> |*<br>
> *| OS-EXT-SRV-ATTR:hypervisor_hostname | <a href="http://acelga.psi.unc.edu.ar" target="_blank">acelga.psi.unc.edu.ar</a><br>
> |*<br>
> *| flavor | m1.tiny (1)<br>
> |*<br>
> *| id |<br>
> de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09 |*<br>
> *| security_groups | [{u'name': u'default'}]<br>
> |*<br>
> *| user_id | 20390b639d4449c18926dca5e038ec5e<br>
> |*<br>
> *| name | p9<br>
> |*<br>
> *| created | 2013-09-02T15:27:06Z<br>
> |*<br>
> *| tenant_id | d1e3aae242f14c488d2225dcbf1e96d6<br>
> |*<br>
> *| OS-DCF:diskConfig | MANUAL<br>
> |*<br>
> *| metadata | {}<br>
> |*<br>
> *| accessIPv4 |<br>
> |*<br>
> *| accessIPv6 |<br>
> |*<br>
> *| progress | 0<br>
> |*<br>
> *| OS-EXT-STS:power_state | 1<br>
> |*<br>
> *| OS-EXT-AZ:availability_zone | nova<br>
> |*<br>
> *| config_drive |<br>
> |*<br>
> *<br>
><br>
> +-------------------------------------+-----------------------------------------------------------+<br>
> *<br>
> *root@cebolla:~/tool#*<br>
><br>
> -So i try to move it to the other node "cebolla"<br>
><br>
> *root@acelga:~/tools# nova --os-username=noc-admin --os-tenant-name=noc<br>
> --os-password=HjZ5V9yj --os-auth-url<br>
> <a href="http://172.19.136.1:35357/v2.0live-migration" target="_blank">http://172.19.136.1:35357/v2.0live-migration</a><br>
> de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09 cebolla<br>
> *<br>
> *root@acelga:~/tools# virsh list*<br>
> * Id Name State*<br>
> *----------------------------------------------------*<br>
> *<br>
> *<br>
> *root@acelga:~/tools#*<br>
><br>
> No error messages at all on the "acelga" compute node so far. If I check the<br>
> other node, I can see the instance has been migrated:<br>
><br>
> *root@cebolla:~/tool# virsh list*<br>
> * Id Nombre Estado*<br>
> *----------------------------------------------------*<br>
> * 11 instance-00000022 ejecutando*<br>
> *<br>
> *<br>
> *root@cebolla:~/tool#*<br>
><br>
><br>
> -BUT... after a few seconds I get this in "acelga"'s nova-compute.log:<br>
><br>
><br>
> *2013-09-02 15:35:45.784 4601 DEBUG nova.openstack.common.rpc.common [-]<br>
> Timed out waiting for RPC response: timed out _error_callback<br>
><br>
> /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py:628<br>
> *<br>
> *2013-09-02 15:35:45.790 4601 ERROR nova.utils [-] in fixed duration<br>
> looping call*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils Traceback (most recent call<br>
> last):*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/utils.py", line 594, in _inner*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils self.f(*self.args, **<br>
> <a href="http://self.kw" target="_blank">self.kw</a>)*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3129,<br>
> in wait_for_live_migration*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils migrate_data)*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3208, in<br>
> _post_live_migration*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils migration)*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 664, in<br>
> network_migrate_instance_start*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils migration)*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/conductor/rpcapi.py", line 415, in<br>
> network_migrate_instance_start*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils return<br>
> self.call(context, msg, version='1.41')*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/proxy.py", line<br>
> 80, in call*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils return rpc.call(context,<br>
> self._get_topic(topic), msg, timeout)*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/__init__.py",<br>
> line 140, in call*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils return<br>
> _get_impl().call(CONF, context, topic, msg, timeout)*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",<br>
> line 798, in call*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils<br>
> rpc_amqp.get_connection_pool(conf, Connection))*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line<br>
> 612, in call*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils rv = list(rv)*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line<br>
> 554, in __iter__*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils self.done()*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/contextlib.py", line 24, in __exit__*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils self.gen.next()*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line<br>
> 551, in __iter__*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils self._iterator.next()*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",<br>
> line 648, in iterconsume*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils yield<br>
> self.ensure(_error_callback, _consume)*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",<br>
> line 566, in ensure*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils error_callback(e)*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",<br>
> line 629, in _error_callback*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils raise<br>
> rpc_common.Timeout()*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils Timeout: Timeout while<br>
> waiting on RPC response.*<br>
> *2013-09-02 15:35:45.790 4601 TRACE nova.utils*<br>
><br>
><br>
> -And the VM state never changes back to ACTIVE from MIGRATING:<br>
><br>
><br>
> *root@cebolla:~/tool# nova --os-username=noc-admin --os-tenant-name=noc<br>
> --os-password=XXXXX --os-auth-url <a href="http://172.19.136.1:35357/v2.0" target="_blank">http://172.19.136.1:35357/v2.0</a> show<br>
> de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09*<br>
> *<br>
><br>
> +-------------------------------------+-----------------------------------------------------------+<br>
> *<br>
> *| Property | Value<br>
> |*<br>
> *<br>
><br>
> +-------------------------------------+-----------------------------------------------------------+<br>
> *<br>
> *| status | MIGRATING<br>
> |*<br>
> *| updated | 2013-09-02T15:33:54Z<br>
> |*<br>
> *| OS-EXT-STS:task_state | migrating<br>
> |*<br>
> *| OS-EXT-SRV-ATTR:host | acelga<br>
> |*<br>
> *| key_name | None<br>
> |*<br>
> *| image | Ubuntu 12.04.2 LTS<br>
> (1359ca8d-23a2-40e8-940f-d90b3e68bb39) |*<br>
> *| vlan1 network | 172.16.16.175<br>
> |*<br>
> *| hostId |<br>
> 81be94870821e17e327d92e9c80548ffcdd37d24054a235116669f53 |*<br>
> *| OS-EXT-STS:vm_state | active<br>
> |*<br>
> *| OS-EXT-SRV-ATTR:instance_name | instance-00000022<br>
> |*<br>
> *| OS-EXT-SRV-ATTR:hypervisor_hostname | <a href="http://acelga.psi.unc.edu.ar" target="_blank">acelga.psi.unc.edu.ar</a><br>
> |*<br>
> *| flavor | m1.tiny (1)<br>
> |*<br>
> *| id |<br>
> de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09 |*<br>
> *| security_groups | [{u'name': u'default'}]<br>
> |*<br>
> *| user_id | 20390b639d4449c18926dca5e038ec5e<br>
> |*<br>
> *| name | p9<br>
> |*<br>
> *| created | 2013-09-02T15:27:06Z<br>
> |*<br>
> *| tenant_id | d1e3aae242f14c488d2225dcbf1e96d6<br>
> |*<br>
> *| OS-DCF:diskConfig | MANUAL<br>
> |*<br>
> *| metadata | {}<br>
> |*<br>
> *| accessIPv4 |<br>
> |*<br>
> *| accessIPv6 |<br>
> |*<br>
> *| OS-EXT-STS:power_state | 1<br>
> |*<br>
> *| OS-EXT-AZ:availability_zone | nova<br>
> |*<br>
> *| config_drive |<br>
> |*<br>
> *<br>
><br>
> +-------------------------------------+-----------------------------------------------------------+<br>
> *<br>
> *root@cebolla:~/tool#*<br>
><br>
><br>
> Funny fact:<br>
> -The VM still answers ping after migration, so I think this is good.<br>
><br>
> Any ideas about this problem? At first I thought it could be related to a<br>
> connection problem between the nodes, but the VM migrates completely at the<br>
> hypervisor level; somehow the "instance has been migrated" ACK seems to be<br>
> missing.<br>
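> <br>
> (A few things worth checking for a timeout in that spot -- command names are the stock ones, adjust to your<br>
> setup: network_migrate_instance_start goes from nova-compute to nova-conductor, and the conductor then calls<br>
> into the configured network API (with nova-network that is yet another RPC hop), so it is worth confirming<br>
> that network_api_class is set consistently on every node and that all the services and queues are healthy.)<br>
> <br>
> grep network_api_class /etc/nova/nova.conf   # same value on compute, conductor and network nodes?<br>
> nova-manage service list                     # any dead services?<br>
> rabbitmqctl list_queues name messages | grep -E 'network|conductor'   # messages piling up?<br>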
><br>
><br>
> --<br>
> Pavlik Salles Juan José<br>
> -------------- next part --------------<br>
> An HTML attachment was scrubbed...<br>
> URL: <<br>
> <a href="http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/e8b7058b/attachment-0001.html" target="_blank">http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/e8b7058b/attachment-0001.html</a><br>
> ><br>
><br>
> ------------------------------<br>
><br>
> Message: 8<br>
> Date: Mon, 2 Sep 2013 19:12:40 +0000<br>
> From: Sebastian Porombka <<a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a>><br>
> To: "<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>"<br>
> <<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> Cc: Holger Nitsche <<a href="mailto:hn@uni-paderborn.de">hn@uni-paderborn.de</a>><br>
> Subject: Re: [Openstack-operators] Quantum Security Groups not working<br>
> - iptables rules are not Evaluated<br>
> Message-ID: <<a href="mailto:CE4AAA44.491DD%25porombka@uni-paderborn.de">CE4AAA44.491DD%porombka@uni-paderborn.de</a>><br>
> Content-Type: text/plain; charset="iso-8859-1"<br>
><br>
> Hi<br>
><br>
> Yes, openvswitch-brcompat (as an Ubuntu package) was installed.<br>
> Uninstalling the package removed the qbr* interfaces from<br>
> 'ovs-vsctl show' and solved the problem. Big thanks to you.<br>
><br>
> Maybe a short sentence in the documentation would be nice. :)<br>
><br>
> Greetings<br>
> Sebastian<br>
><br>
> --<br>
> Sebastian Porombka, M.Sc.<br>
> Zentrum für Informations- und Medientechnologien (IMT)<br>
> Universität Paderborn<br>
><br>
> E-Mail: <a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a><br>
> Tel.: 05251/60-5999<br>
> Fax: 05251/60-48-5999<br>
> Raum: N5.314<br>
><br>
> --------------------------------------------<br>
> Q: Why is this email five sentences or less?<br>
> A: <a href="http://five.sentenc.es" target="_blank">http://five.sentenc.es</a><br>
><br>
> Please consider the environment before printing this email.<br>
><br>
><br>
><br>
><br>
><br>
> Am 02.09.13 17:28 schrieb "Darragh O'Reilly" unter<br>
> <<a href="mailto:dara2002-openstack@yahoo.com">dara2002-openstack@yahoo.com</a>>:<br>
><br>
> >Hi Lorin,<br>
> ><br>
> >sure, sorry for the beverity. It seems the brcompat is being used because<br>
> >qbr0188455b-25 appears in the 'ovs-vsctl show' output - so it was created<br>
> >as an OVS bridge, but it should have been created as a Linux bridge. I<br>
> >have never used brcompat, but I believe it intercepts calls from brctl<br>
> >and configures OVS bridges instead of Linux bridges. I'm not sure how to<br>
> >uninstall/disable it - it's probably an Operating System package.<br>
> ><br>
> >I don't think any Openstack doc says to install/enable it.<br>
> ><br>
> >Re,<br>
> >Darragh.<br>
> ><br>
> >>________________________________<br>
> >> From: Lorin Hochstein <<a href="mailto:lorin@nimbisservices.com">lorin@nimbisservices.com</a>><br>
> >>To: Darragh O'Reilly <<a href="mailto:dara2002-openstack@yahoo.com">dara2002-openstack@yahoo.com</a>><br>
> >>Cc: Sebastian Porombka <<a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a>>;<br>
> >>"<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>"<br>
> >><<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> >>Sent: Monday, 2 September 2013, 16:00<br>
> >>Subject: Re: [Openstack-operators] Quantum Security Groups not working -<br>
> >>iptables rules are not Evaluated<br>
> >><br>
> >><br>
> >><br>
> >>Darragh:<br>
> >><br>
> >><br>
> >>Can you elaborate on this a little more? Do you mean that the "brcompat"<br>
> >>kernel module has been loaded, and this breaks security groups with the<br>
> >>ovs plugin? Should we add something in the documentation about this?<br>
> >><br>
> >><br>
> >>Lorin<br>
> >><br>
> >><br>
> >><br>
> >><br>
> >>Do you mean that the problem is that the ovs-brcompatd service is<br>
> >>running?<br>
> >><br>
> >><br>
> >>openvswitch-brcompat package is installed?<br>
> >><br>
> >><br>
> >><br>
> >>On Mon, Sep 2, 2013 at 10:21 AM, Darragh O'Reilly<br>
> >><<a href="mailto:dara2002-openstack@yahoo.com">dara2002-openstack@yahoo.com</a>> wrote:<br>
> >><br>
> >><br>
> >>>it is not working because you are using the ovs bridge compatibility<br>
> >>>module.<br>
> >>><br>
> >>>Re,<br>
> >>>Darragh.<br>
> >>><br>
> >>>>________________________________<br>
> >>>> From: Sebastian Porombka <<a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a>><br>
> >>>>To: "<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>"<br>
> >>>><<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> >>>>Sent: Monday, 2 September 2013, 14:48<br>
> >>>>Subject: [Openstack-operators] Quantum Security Groups not working -<br>
> >>>>iptables rules are not Evaluated<br>
> >>><br>
> >>>><br>
> >>>><br>
> >>>><br>
> >>>>Hi folks.<br>
> >>>><br>
> >>>><br>
> >>>>We're currently on the way to deploy an openstack (grizzly) cloud<br>
> >>>>environment<br>
> >>>>and suffering in problems implementing the security groups like<br>
> >>>>described in [1].<br>
> >>>><br>
> >>>><br>
> >>>>The (hopefully) relevant configuration settings are:<br>
> >>>><br>
> >>>><br>
> >>>>/etc/nova/nova.conf<br>
> >>>>[?]<br>
> >>>>security_group_api=quantum<br>
> >>>>network_api_class=nova.network.quantumv2.api.API<br>
> >>>>libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver<br>
> >>>>firewall_driver=nova.virt.firewall.NoopFirewallDriver<br>
> >>>>[?]<br>
> >>>><br>
> >>>><br>
> >>>>/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini<br>
> >>>>[?]<br>
> >>>>firewall_driver =<br>
> >>>>quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver<br>
> >>>>[?]<br>
> >>>><br>
> >>>><br>
> >>>>The Networks for the vm's are attached to the compute-nodes via VLAN<br>
> >>>>encapsulation and correctly mapped to the vm's.<br>
> >>>><br>
> >>>><br>
> >>>>From our point of view - we're understanding the need of the<br>
> >>>>"ovs-bridge <> veth glue <> linux-bridge (for filtering) <><br>
> >>>>vm"-construction<br>
> >>>>and observed the single components in our deployment. See [2]<br>
> >>>><br>
> >>>><br>
> >>>>Everything is working except the security groups.<br>
> >>>>We observed that ip-tables rules are generated for the<br>
> >>>>quantum-openvswi-* chains of iptables.<br>
> >>>>And the traffic arriving untagged (native vlan for management) on the<br>
> >>>>machine is processed by iptables but not<br>
> >>>>the traffic which arrived encapsulated.<br>
> >>>><br>
> >>>><br>
> >>>>The traffic which is unpacked by openvswitch and is bridged via the<br>
> >>>>veth and the tap into<br>
> >>>>the machine isn't processed by the iptables rules.<br>
> >>>><br>
> >>>><br>
> >>>>We have no remaining clue/idea how to solve this issue? :(<br>
> >>>><br>
> >>>><br>
> >>>>Greetings<br>
> >>>> Sebastian<br>
> >>>><br>
> >>>><br>
> >>>>[1]<br>
> >>>><br>
> <a href="http://docs.openstack.org/trunk/openstack-network/admin/content/under_t" target="_blank">http://docs.openstack.org/trunk/openstack-network/admin/content/under_t</a><br>
> >>>>he_hood_openvswitch.html<br>
> >>>>[2] <a href="http://pastebin.com/WXMH6y4A" target="_blank">http://pastebin.com/WXMH6y4A</a><br>
> >>>><br>
> >>>><br>
> >>>>--<br>
> >>>>Sebastian Porombka, M.Sc.<br>
> >>>>Zentrum f?r Informations- und Medientechnologien (IMT)<br>
> >>>>Universit?t Paderborn<br>
> >>>><br>
> >>>><br>
> >>>>E-Mail: <a href="mailto:porombka@uni-paderborn.de">porombka@uni-paderborn.de</a><br>
> >>>>Tel.: 05251/60-5999<br>
> >>>>Fax: 05251/60-48-5999<br>
> >>>>Raum: N5.314<br>
> >>>><br>
> >>>><br>
> >>>>--------------------------------------------<br>
> >>>>Q: Why is this email five sentences or less?<br>
> >>>>A: <a href="http://five.sentenc.es" target="_blank">http://five.sentenc.es</a><br>
> >>>><br>
> >>>><br>
> >>>>Please consider the environment before printing this email.<br>
> >>>>_______________________________________________<br>
> >>>>OpenStack-operators mailing list<br>
> >>>><a href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a><br>
> >>>><br>
> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
> >>>><br>
> >>>><br>
> >>>><br>
> >>><br>
> >>>_______________________________________________<br>
> >>>OpenStack-operators mailing list<br>
> >>><a href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a><br>
> >>><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
> >>><br>
> >><br>
> >><br>
> >><br>
> >>--<br>
> >><br>
> >>Lorin Hochstein<br>
> >><br>
> >>Lead Architect - Cloud Services<br>
> >>Nimbis Services, Inc.<br>
> >><a href="http://www.nimbisservices.com" target="_blank">www.nimbisservices.com</a><br>
> >><br>
> >><br>
> -------------- next part --------------<br>
> A non-text attachment was scrubbed...<br>
> Name: smime.p7s<br>
> Type: application/pkcs7-signature<br>
> Size: 5443 bytes<br>
> Desc: not available<br>
> URL: <<br>
> <a href="http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/30940657/attachment-0001.bin" target="_blank">http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/30940657/attachment-0001.bin</a><br>
> ><br>
><br>
> ------------------------------<br>
><br>
> Message: 9<br>
> Date: Mon, 2 Sep 2013 21:44:09 -0300<br>
> From: Juan José Pavlik Salles <<a href="mailto:jjpavlik@gmail.com">jjpavlik@gmail.com</a>><br>
> To: "<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>"<br>
> <<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
> Subject: Re: [Openstack-operators] Migrating instances in grizzly<br>
> Message-ID:<br>
> <<br>
> <a href="mailto:CAKCETkfuguB_AAdY9iYfddO65hNF-jVvhiGu62XFu3XhvaoBuQ@mail.gmail.com">CAKCETkfuguB_AAdY9iYfddO65hNF-jVvhiGu62XFu3XhvaoBuQ@mail.gmail.com</a>><br>
> Content-Type: text/plain; charset="iso-8859-1"<br>
><br>
> I've also found this in nova-conductor.log:<br>
><br>
> 2013-09-02 15:35:27.208 DEBUG nova.openstack.common.rpc.common<br>
> [req-e0473533-89af-4ff5-b6fa-4b0b6eb50a6d 31020076174943bdb7486c330a298d93<br>
> d1e3aae242f14c488d2225dc<br>
> bf1e96d6] Timed out waiting for RPC response: timed out _error_callback<br>
><br>
> /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py:628<br>
> 2013-09-02 15:35:27.222 ERROR nova.openstack.common.rpc.amqp<br>
> [req-e0473533-89af-4ff5-b6fa-4b0b6eb50a6d 31020076174943bdb7486c330a298d93<br>
> d1e3aae242f14c488d2225dcbf<br>
> 1e96d6] Exception during message handling<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp Traceback<br>
> (most recent call last):<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line<br>
> 430, in _proce<br>
> ss_data<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp rval<br>
> = self.proxy.dispatch(ctxt, version, method, **args)<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py",<br>
> line 133, in<br>
> dispatch<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp<br>
> return getattr(proxyobj, method)(ctxt, **kwargs)<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 399, in<br>
> network_migrat<br>
> e_instance_start<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp<br>
> self.network_api.migrate_instance_start(context, instance, migration)<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 89, in wrapped<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp<br>
> return func(self, context, *args, **kwargs)<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 501, in<br>
> migrate_instance_sta<br>
> rt<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp<br>
> self.network_rpcapi.migrate_instance_start(context, **args)<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/network/rpcapi.py", line 333, in<br>
> migrate_instance_<br>
> start<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp<br>
> version='1.2')<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/proxy.py", line<br>
> 80, in call<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp<br>
> return rpc.call(context, self._get_topic(topic), msg, timeout)<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/__init__.py",<br>
> line 140, in ca<br>
> ll<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp<br>
> return _get_impl().call(CONF, context, topic, msg, timeout)<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",<br>
> line 798, in<br>
> call<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp<br>
> rpc_amqp.get_connection_pool(conf, Connection))<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line<br>
> 612, in call<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp rv =<br>
> list(rv)<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line<br>
> 554, in __iter__<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp<br>
> self.done()<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/contextlib.py", line 24, in __exit__<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp<br>
> self.gen.next()<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line<br>
> 551, in __iter__<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp<br>
> self._iterator.next()<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",<br>
> line 648, in iterconsume<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp yield<br>
> self.ensure(_error_callback, _consume)<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",<br>
> line 566, in ensure<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp<br>
> error_callback(e)<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp File<br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",<br>
> line 629, in _error_callback<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp raise<br>
> rpc_common.Timeout()<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp Timeout:<br>
> Timeout while waiting on RPC response.<br>
> 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp<br>
> 2013-09-02 15:35:27.237 ERROR nova.openstack.common.rpc.common<br>
> [req-e0473533-89af-4ff5-b6fa-4b0b6eb50a6d 31020076174943bdb7486c330a298d93<br>
> d1e3aae242f14c488d2225dcbf1e96d6] Returning exception Timeout while waiting<br>
> on RPC response. to caller<br>
><br>
> Does anybody know all the steps it takes to live-migrate an instance? It<br>
> seems to be stopping inside the network_migrate_instance_start function,<br>
> and I really have no clue at all...<br>
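> One thing that might help narrow it down (a rough sketch, assuming RabbitMQ<br>
> is the message broker and that nova-network, not Quantum, is handling<br>
> networking here, as the traceback suggests) is to confirm that the services<br>
> the conductor calls are up and actually consuming from their RPC queues:<br>
><br>
> # on the controller: are nova-network and nova-conductor alive and checking in?<br>
> nova-manage service list<br>
> # on the RabbitMQ host: do the relevant topic queues have consumers?<br>
> rabbitmqctl list_queues name consumers messages | grep -E 'network|conductor'<br>
><br>
> If the "network" queue has no consumers, the migrate_instance_start call has<br>
> nobody to answer it and will sit there until the RPC timeout fires, which<br>
> would match the trace above.<br>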
><br>
><br>
> 2013/9/2 Juan José Pavlik Salles <<a href="mailto:jjpavlik@gmail.com">jjpavlik@gmail.com</a>><br>
><br>
> > Hi guys, last Friday I started testing live migration in my Grizzly cloud<br>
> > with shared storage (GFS2), but I ran into a somewhat weird problem:<br>
> ><br>
> > This is the status before migrating:<br>
> ><br>
> > -I have an instance named p9 (also known as instance-00000022) running on<br>
> > the "acelga" compute node.<br>
> ><br>
> > *root@acelga:~/tools# virsh list*<br>
> > * Id Name State*<br>
> > *----------------------------------------------------*<br>
> > * 6 instance-00000022 running*<br>
> > *<br>
> > *<br>
> > *root@acelga:~/tools# *<br>
> > *<br>
> > *<br>
> > *<br>
> > *<br>
> > *root@cebolla:~/tool# virsh list*<br>
> > * Id Nombre Estado*<br>
> > *----------------------------------------------------*<br>
> > *<br>
> > *<br>
> > *root@cebolla:~/tool# *<br>
> ><br>
> > -Here you can see all the info about the instance<br>
> ><br>
> > *root@cebolla:~/tool# nova --os-username=noc-admin --os-tenant-name=noc<br>
> > --os-password=XXXXXXX --os-auth-url <a href="http://172.19.136.1:35357/v2.0" target="_blank">http://172.19.136.1:35357/v2.0</a> show<br>
> > de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09*<br>
> > *<br>
> ><br>
> +-------------------------------------+-----------------------------------------------------------+<br>
> > *<br>
> > *| Property | Value<br>
> > |*<br>
> > *<br>
> ><br>
> +-------------------------------------+-----------------------------------------------------------+<br>
> > *<br>
> > *| status | ACTIVE<br>
> > |*<br>
> > *| updated | 2013-09-02T15:27:39Z<br>
> > |*<br>
> > *| OS-EXT-STS:task_state | None<br>
> > |*<br>
> > *| OS-EXT-SRV-ATTR:host | acelga<br>
> > |*<br>
> > *| key_name | None<br>
> > |*<br>
> > *| image | Ubuntu 12.04.2 LTS<br>
> > (1359ca8d-23a2-40e8-940f-d90b3e68bb39) |*<br>
> > *| vlan1 network | 172.16.16.175<br>
> > |*<br>
> > *| hostId |<br>
> > 81be94870821e17e327d92e9c80548ffcdd37d24054a235116669f53 |*<br>
> > *| OS-EXT-STS:vm_state | active<br>
> > |*<br>
> > *| OS-EXT-SRV-ATTR:instance_name | instance-00000022<br>
> > |*<br>
> > *| OS-EXT-SRV-ATTR:hypervisor_hostname | <a href="http://acelga.psi.unc.edu.ar" target="_blank">acelga.psi.unc.edu.ar</a><br>
> > |*<br>
> > *| flavor | m1.tiny (1)<br>
> > |*<br>
> > *| id |<br>
> > de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09 |*<br>
> > *| security_groups | [{u'name': u'default'}]<br>
> > |*<br>
> > *| user_id | 20390b639d4449c18926dca5e038ec5e<br>
> > |*<br>
> > *| name | p9<br>
> > |*<br>
> > *| created | 2013-09-02T15:27:06Z<br>
> > |*<br>
> > *| tenant_id | d1e3aae242f14c488d2225dcbf1e96d6<br>
> > |*<br>
> > *| OS-DCF:diskConfig | MANUAL<br>
> > |*<br>
> > *| metadata | {}<br>
> > |*<br>
> > *| accessIPv4 |<br>
> > |*<br>
> > *| accessIPv6 |<br>
> > |*<br>
> > *| progress | 0<br>
> > |*<br>
> > *| OS-EXT-STS:power_state | 1<br>
> > |*<br>
> > *| OS-EXT-AZ:availability_zone | nova<br>
> > |*<br>
> > *| config_drive |<br>
> > |*<br>
> > *<br>
> ><br>
> +-------------------------------------+-----------------------------------------------------------+<br>
> > *<br>
> > *root@cebolla:~/tool#*<br>
> ><br>
> > -So I try to move it to the other node, "cebolla":<br>
> ><br>
> > *root@acelga:~/tools# nova --os-username=noc-admin --os-tenant-name=noc<br>
> > --os-password=XXXXXXX --os-auth-url <a href="http://172.19.136.1:35357/v2.0" target="_blank">http://172.19.136.1:35357/v2.0</a><br>
> > live-migration de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09 cebolla*<br>
> > *root@acelga:~/tools# virsh list*<br>
> > * Id Name State*<br>
> > *----------------------------------------------------*<br>
> > *<br>
> > *<br>
> > *root@acelga:~/tools#*<br>
> ><br>
> > No error messages at all on the "acelga" compute node so far. If I check<br>
> > the other node, I can see the instance has been migrated:<br>
> ><br>
> > *root@cebolla:~/tool# virsh list*<br>
> > * Id Nombre Estado*<br>
> > *----------------------------------------------------*<br>
> > * 11 instance-00000022 ejecutando*<br>
> > *<br>
> > *<br>
> > *root@cebolla:~/tool#*<br>
> ><br>
> ><br>
> > -BUT... after a few seconds I get this in "acelga"'s nova-compute.log:<br>
> ><br>
> ><br>
> > *2013-09-02 15:35:45.784 4601 DEBUG nova.openstack.common.rpc.common [-]<br>
> > Timed out waiting for RPC response: timed out _error_callback<br>
> ><br>
> /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py:628<br>
> > *<br>
> > *2013-09-02 15:35:45.790 4601 ERROR nova.utils [-] in fixed duration<br>
> > looping call*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils Traceback (most recent<br>
> > call last):*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> > "/usr/lib/python2.7/dist-packages/nova/utils.py", line 594, in _inner*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils self.f(*self.args, **self.kw)*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line<br>
> 3129,<br>
> > in wait_for_live_migration*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils migrate_data)*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3208, in<br>
> > _post_live_migration*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils migration)*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> > "/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 664, in<br>
> > network_migrate_instance_start*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils migration)*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> > "/usr/lib/python2.7/dist-packages/nova/conductor/rpcapi.py", line 415, in<br>
> > network_migrate_instance_start*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils return<br>
> > self.call(context, msg, version='1.41')*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/proxy.py",<br>
> line<br>
> > 80, in call*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils return<br>
> > rpc.call(context, self._get_topic(topic), msg, timeout)*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/__init__.py",<br>
> > line 140, in call*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils return<br>
> > _get_impl().call(CONF, context, topic, msg, timeout)*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> ><br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",<br>
> > line 798, in call*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils<br>
> > rpc_amqp.get_connection_pool(conf, Connection))*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py",<br>
> line<br>
> > 612, in call*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils rv = list(rv)*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py",<br>
> line<br>
> > 554, in __iter__*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils self.done()*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> > "/usr/lib/python2.7/contextlib.py", line 24, in __exit__*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils self.gen.next()*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py",<br>
> line<br>
> > 551, in __iter__*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils self._iterator.next()*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> ><br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",<br>
> > line 648, in iterconsume*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils yield<br>
> > self.ensure(_error_callback, _consume)*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> ><br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",<br>
> > line 566, in ensure*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils error_callback(e)*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils File<br>
> ><br>
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",<br>
> > line 629, in _error_callback*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils raise<br>
> > rpc_common.Timeout()*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils Timeout: Timeout while<br>
> > waiting on RPC response.*<br>
> > *2013-09-02 15:35:45.790 4601 TRACE nova.utils*<br>
> ><br>
> ><br>
> > -And the VM state never changes back to ACTIVE from MIGRATING:<br>
> ><br>
> ><br>
> > *root@cebolla:~/tool# nova --os-username=noc-admin --os-tenant-name=noc<br>
> > --os-password=XXXXX --os-auth-url <a href="http://172.19.136.1:35357/v2.0" target="_blank">http://172.19.136.1:35357/v2.0</a> show<br>
> > de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09*<br>
> > *<br>
> ><br>
> +-------------------------------------+-----------------------------------------------------------+<br>
> > *<br>
> > *| Property | Value<br>
> > |*<br>
> > *<br>
> ><br>
> +-------------------------------------+-----------------------------------------------------------+<br>
> > *<br>
> > *| status | MIGRATING<br>
> > |*<br>
> > *| updated | 2013-09-02T15:33:54Z<br>
> > |*<br>
> > *| OS-EXT-STS:task_state | migrating<br>
> > |*<br>
> > *| OS-EXT-SRV-ATTR:host | acelga<br>
> > |*<br>
> > *| key_name | None<br>
> > |*<br>
> > *| image | Ubuntu 12.04.2 LTS<br>
> > (1359ca8d-23a2-40e8-940f-d90b3e68bb39) |*<br>
> > *| vlan1 network | 172.16.16.175<br>
> > |*<br>
> > *| hostId |<br>
> > 81be94870821e17e327d92e9c80548ffcdd37d24054a235116669f53 |*<br>
> > *| OS-EXT-STS:vm_state | active<br>
> > |*<br>
> > *| OS-EXT-SRV-ATTR:instance_name | instance-00000022<br>
> > |*<br>
> > *| OS-EXT-SRV-ATTR:hypervisor_hostname | <a href="http://acelga.psi.unc.edu.ar" target="_blank">acelga.psi.unc.edu.ar</a><br>
> > |*<br>
> > *| flavor | m1.tiny (1)<br>
> > |*<br>
> > *| id |<br>
> > de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09 |*<br>
> > *| security_groups | [{u'name': u'default'}]<br>
> > |*<br>
> > *| user_id | 20390b639d4449c18926dca5e038ec5e<br>
> > |*<br>
> > *| name | p9<br>
> > |*<br>
> > *| created | 2013-09-02T15:27:06Z<br>
> > |*<br>
> > *| tenant_id | d1e3aae242f14c488d2225dcbf1e96d6<br>
> > |*<br>
> > *| OS-DCF:diskConfig | MANUAL<br>
> > |*<br>
> > *| metadata | {}<br>
> > |*<br>
> > *| accessIPv4 |<br>
> > |*<br>
> > *| accessIPv6 |<br>
> > |*<br>
> > *| OS-EXT-STS:power_state | 1<br>
> > |*<br>
> > *| OS-EXT-AZ:availability_zone | nova<br>
> > |*<br>
> > *| config_drive |<br>
> > |*<br>
> > *<br>
> ><br>
> +-------------------------------------+-----------------------------------------------------------+<br>
> > *<br>
> > *root@cebolla:~/tool#*<br>
> ><br>
> ><br>
> > Funny fact:<br>
> > -The VM still answers ping after the migration, so I think that part is good.<br>
> ><br>
> > Any ideas about this problem? At first I thought it could be related to a<br>
> > connection problem between the nodes, but the VM migrates completely at the<br>
> > hypervisor level; it just seems that some "instance has been migrated" ACK<br>
> > is missing.<br>
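> > If the guest really is running fine on "cebolla", one way to get the API<br>
> > state unstuck is nova's reset-state command (a rough sketch; it only resets<br>
> > the state stored in the database and does nothing about whatever made the<br>
> > post-migration RPC call time out):<br>
> ><br>
> > # clear the stuck MIGRATING state once the VM has been confirmed healthy<br>
> > nova --os-username=noc-admin --os-tenant-name=noc --os-password=XXXXXXX \<br>
> >   --os-auth-url <a href="http://172.19.136.1:35357/v2.0" target="_blank">http://172.19.136.1:35357/v2.0</a> \<br>
> >   reset-state --active de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09<br>
> ><br>
> > Note that OS-EXT-SRV-ATTR:host may still point at "acelga" afterwards,<br>
> > because the conductor never finished the migration bookkeeping.<br>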
> ><br>
> ><br>
> > --<br>
> > Pavlik Salles Juan José<br>
> ><br>
><br>
><br>
><br>
> --<br>
> Pavlik Salles Juan José<br>
> -------------- next part --------------<br>
> An HTML attachment was scrubbed...<br>
> URL: <<br>
> <a href="http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/d8ca74f2/attachment.html" target="_blank">http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/d8ca74f2/attachment.html</a><br>
> ><br>
><br>
> ------------------------------<br>
><br>
> _______________________________________________<br>
> OpenStack-operators mailing list<br>
> <a href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a><br>
> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
><br>
><br>
> End of OpenStack-operators Digest, Vol 35, Issue 1<br>
> **************************************************<br>
><br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.openstack.org/pipermail/openstack-operators/attachments/20130903/1495e47f/attachment-0001.html" target="_blank">http://lists.openstack.org/pipermail/openstack-operators/attachments/20130903/1495e47f/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Mon, 2 Sep 2013 23:24:19 -0400<br>
From: Lorin Hochstein <<a href="mailto:lorin@nimbisservices.com">lorin@nimbisservices.com</a>><br>
To: yungho <<a href="mailto:yungho5054@gmail.com">yungho5054@gmail.com</a>><br>
Cc: "<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>"<br>
<<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
Subject: Re: [Openstack-operators] OpenStack-operators Digest, Vol 35,<br>
Issue 1<br>
Message-ID:<br>
<<a href="mailto:CADzpNMUDv9zGMDYixrRi0J6HMPam136Uxqqq96snr7EtKp92xA@mail.gmail.com">CADzpNMUDv9zGMDYixrRi0J6HMPam136Uxqqq96snr7EtKp92xA@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="iso-8859-1"<br>
<br>
Yungho:<br>
<br>
On Mon, Sep 2, 2013 at 10:51 PM, yungho <<a href="mailto:yungho5054@gmail.com">yungho5054@gmail.com</a>> wrote:<br>
<br>
> Hello,<br>
> I am deploying the OpenStack Grizzly release in a CentOS 6.4 environment,<br>
> with a Controller Node, Network Node and Compute Node. Networking uses<br>
> Quantum, but I still do not understand the Quantum network models, and I<br>
> would like to know under what circumstances to use GRE mode and under what<br>
> circumstances to use VLAN mode.<br>
><br>
><br>
Do you have administrative privileges on the networking switch that<br>
connects your compute hosts together? If so, I would recommend using vlan<br>
mode. You will need to configure your switch appropriately.<br>
<br>
If you aren't able to modify the settings on your switch, or if your nodes<br>
are not all connected to the same L2 network, then you will need to use GRE<br>
tunnels.<br>
<br>
I believe that GRE will work for pretty much all scenarios (since it just<br>
requires IP connectivity), but I think it introduces some overhead.<br>
However, I have no first-hand experience with it.<br>
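For reference, the choice mostly comes down to a handful of Open vSwitch<br>
plugin settings. The sketch below uses illustrative values: the file path<br>
matches a Grizzly RPM install, and the ranges, bridge name and local_ip are<br>
placeholders to adapt. Pick one of the two blocks, not both.<br>
<br>
PLUGIN_INI=/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini<br>
# VLAN mode: requires matching trunk/VLAN configuration on the physical switch<br>
openstack-config --set $PLUGIN_INI OVS tenant_network_type vlan<br>
openstack-config --set $PLUGIN_INI OVS network_vlan_ranges physnet1:100:199<br>
openstack-config --set $PLUGIN_INI OVS bridge_mappings physnet1:br-eth1<br>
# GRE mode: only needs IP connectivity between the nodes' data interfaces<br>
openstack-config --set $PLUGIN_INI OVS tenant_network_type gre<br>
openstack-config --set $PLUGIN_INI OVS enable_tunneling True<br>
openstack-config --set $PLUGIN_INI OVS tunnel_id_ranges 1:1000<br>
openstack-config --set $PLUGIN_INI OVS local_ip 192.168.100.10<br>
<br>
Either way, restart quantum-server and the openvswitch agents afterwards. With<br>
GRE, also keep an eye on the instance MTU, since the encapsulation header is<br>
part of the overhead mentioned above.<br>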
<br>
Take care,<br>
<br>
Lorin<br>
--<br>
Lorin Hochstein<br>
Lead Architect - Cloud Services<br>
Nimbis Services, Inc.<br>
<a href="http://www.nimbisservices.com" target="_blank">www.nimbisservices.com</a><br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/e680b107/attachment-0001.html" target="_blank">http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/e680b107/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 3<br>
Date: Tue, 3 Sep 2013 10:18:39 +0200<br>
From: Simon Pasquier <<a href="mailto:simon.pasquier@bull.net">simon.pasquier@bull.net</a>><br>
To: <<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a>><br>
Subject: Re: [Openstack-operators] Running DHCP agent in HA (grizzly)<br>
Message-ID: <<a href="mailto:52259B5F.8050407@bull.net">52259B5F.8050407@bull.net</a>><br>
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed<br>
<br>
Re-sending to the list, as it was sent only to me.<br>
<br>
<br>
-------- Original Message --------<br>
Subject: Re: [Openstack-operators] Running DHCP agent in HA (grizzly)<br>
Date: Mon, 2 Sep 2013 11:02:53 -0500<br>
From: David Wittman <<a href="mailto:dwittman@gmail.com">dwittman@gmail.com</a>><br>
To: Simon Pasquier <<a href="mailto:simon.pasquier@bull.net">simon.pasquier@bull.net</a>><br>
<br>
<br>
<br>
Robert,<br>
<br>
As Simon mentioned, this isn't a feature in Grizzly yet. For now, you<br>
can achieve similar functionality by calling the quantum-ha-tool from<br>
stackforge[1] at a regular interval. More specifically, you're looking<br>
for the `--replicate-dhcp` option.<br>
<br>
[1]:<br>
<a href="https://github.com/stackforge/cookbook-openstack-network/blob/master/files/default/quantum-ha-tool.py" target="_blank">https://github.com/stackforge/cookbook-openstack-network/blob/master/files/default/quantum-ha-tool.py</a><br>
<br>
Dave<br>
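A minimal way to wire that up is a cron entry (a rough sketch: the script<br>
location, openrc path, log file and interval are placeholders, and I am<br>
assuming the tool picks up the usual OS_* credentials from its environment):<br>
<br>
# /etc/cron.d/quantum-ha-tool: re-replicate DHCP networks every 5 minutes<br>
*/5 * * * * root . /root/openrc && python /usr/local/bin/quantum-ha-tool.py --replicate-dhcp >> /var/log/quantum-ha-tool.log 2>&1<br>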
<br>
<br>
On Mon, Sep 2, 2013 at 10:09 AM, Simon Pasquier <<a href="mailto:simon.pasquier@bull.net">simon.pasquier@bull.net</a><br>
<mailto:<a href="mailto:simon.pasquier@bull.net">simon.pasquier@bull.net</a>>> wrote:<br>
<br>
Hello,<br>
I guess you are running Grizzly. You need Havana to be able to schedule<br>
multiple DHCP agents per network automatically.<br>
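Until then, the second agent has to be attached by hand (a rough sketch, with<br>
NETWORK and DHCP_AGENT_ID as placeholders):<br>
<br>
quantum agent-list                              # note the IDs of both DHCP agents<br>
quantum dhcp-agent-list-hosting-net NETWORK     # which agent serves the network today<br>
quantum dhcp-agent-network-add DHCP_AGENT_ID NETWORK<br>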
Simon<br>
<br>
On 02/09/2013 16:41, Robert van Leeuwen wrote:<br>
<br>
Hi,<br>
<br>
How would one make sure multiple dhcp agents run for the same<br>
segment?<br>
I currently have 2 dhcp agents running but only one is added to<br>
a network when the network is created.<br>
<br>
I can manually add the other one with "quantum<br>
dhcp-agent-network-add", but I would like this to happen<br>
automatically.<br>
<br>
Thx,<br>
Robert<br>
<br>
<br>
<br>
<br>
<br>
--<br>
Simon Pasquier<br>
Software Engineer<br>
Bull, Architect of an Open World<br>
Phone: +33 4 76 29 71 49<br>
<a href="http://www.bull.com" target="_blank">http://www.bull.com</a><br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
------------------------------<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
<a href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
<br>
<br>
End of OpenStack-operators Digest, Vol 35, Issue 2<br>
**************************************************<br>
</blockquote></div><br></div>