[Openstack-operators] OpenStack-operators Digest, Vol 35, Issue 2

yungho yungho5054 at gmail.com
Tue Sep 3 13:10:04 UTC 2013


Can anyone explain the relationship between domains, groups, and users in
OpenStack?
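For context while waiting for an answer: the Keystone v3 identity concepts can be sketched as a toy data model (plain Python for illustration, not Keystone code; the class and attribute names are made up). A domain is a namespace that owns users and groups; a group is a named collection of users inside one domain, so roles can be granted to many users at once.

```python
# Toy model of Keystone v3 identity concepts (illustrative only, not Keystone code).
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    domain: str          # every user belongs to exactly one domain

@dataclass
class Group:
    name: str
    domain: str          # a group also lives inside exactly one domain
    members: list = field(default_factory=list)

    def add(self, user: User) -> None:
        # Membership is how a role can be assigned to many users at once.
        self.members.append(user)

# One domain ("corp"), one group, two users.
alice = User("alice", domain="corp")
bob = User("bob", domain="corp")
admins = Group("admins", domain="corp")
admins.add(alice)
admins.add(bob)

print([u.name for u in admins.members])   # -> ['alice', 'bob']
```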


On Tue, Sep 3, 2013 at 4:18 PM, <
openstack-operators-request at lists.openstack.org> wrote:

> Send OpenStack-operators mailing list submissions to
>         openstack-operators at lists.openstack.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> or, via email, send a message with subject or body 'help' to
>         openstack-operators-request at lists.openstack.org
>
> You can reach the person managing the list at
>         openstack-operators-owner at lists.openstack.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of OpenStack-operators digest..."
>
>
> Today's Topics:
>
>    1. Re: OpenStack-operators Digest, Vol 35,   Issue 1 (yungho)
>    2. Re: OpenStack-operators Digest, Vol 35,   Issue 1 (Lorin Hochstein)
>    3. Re: Running DHCP agent in HA (grizzly) (Simon Pasquier)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 3 Sep 2013 10:51:57 +0800
> From: yungho <yungho5054 at gmail.com>
> To: openstack-operators at lists.openstack.org
> Subject: Re: [Openstack-operators] OpenStack-operators Digest, Vol 35,
>         Issue 1
> Message-ID:
>         <
> CAHeoptThFLv79dkKHnRdV3bizxEHn5YqW7U6DxPUaND+zvjsog at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hello,
>        I deployed OpenStack Grizzly (G release) on CentOS 6.4 with a Controller
> Node, a Network Node, and a Compute Node. Networking uses Quantum, but I do not
> yet understand the Quantum network models, and I would like to know under what
> circumstances to use GRE mode and under what circumstances VLAN mode.
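In case it helps: in the Grizzly OVS plugin the mode is selected by `tenant_network_type` in ovs_quantum_plugin.ini. A minimal sketch follows (the option names match the Grizzly OVS plugin; the ranges, bridge names, and IP are made-up examples). Roughly: VLAN mode needs the physical switches to trunk the tenant VLAN range, while GRE mode only needs IP connectivity between nodes.

```ini
# /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini (sketch)
[OVS]
# VLAN mode: each tenant network gets an 802.1Q tag; switch ports must trunk the range.
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
bridge_mappings = physnet1:br-eth1

# GRE mode (use instead of the three lines above): each tenant network gets a
# GRE tunnel ID; only IP reachability between nodes is required, no switch config.
# tenant_network_type = gre
# enable_tunneling = True
# tunnel_id_ranges = 1:1000
# local_ip = 10.0.0.2
```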
>
>
> On Tue, Sep 3, 2013 at 8:44 AM, <
> openstack-operators-request at lists.openstack.org> wrote:
>
> >
> >
> > Today's Topics:
> >
> >    1. Quantum Security Groups not working - iptables rules are not
> >       Evaluated (Sebastian Porombka)
> >    2. Re: Quantum Security Groups not working - iptables rules are
> >       not Evaluated (Darragh O'Reilly)
> >    3. Running DHCP agent in HA (grizzly) (Robert van Leeuwen)
> >    4. Re: Quantum Security Groups not working - iptables rules are
> >       not Evaluated (Lorin Hochstein)
> >    5. Re: Running DHCP agent in HA (grizzly) (Simon Pasquier)
> >    6. Re: Quantum Security Groups not working - iptables rules are
> >       not Evaluated (Darragh O'Reilly)
> >    7. Migrating instances in grizzly (Juan José Pavlik Salles)
> >    8. Re: Quantum Security Groups not working - iptables rules are
> >       not Evaluated (Sebastian Porombka)
> >    9. Re: Migrating instances in grizzly (Juan José Pavlik Salles)
> >
> >
> > ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Mon, 2 Sep 2013 13:48:08 +0000
> > From: Sebastian Porombka <porombka at uni-paderborn.de>
> > To: "openstack-operators at lists.openstack.org"
> >         <openstack-operators at lists.openstack.org>
> > Subject: [Openstack-operators] Quantum Security Groups not working -
> >         iptables rules are not Evaluated
> > Message-ID: <CE4A6242.49182%porombka at uni-paderborn.de>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > Hi folks.
> >
> > We are currently deploying an OpenStack (Grizzly) cloud environment
> > and are having problems implementing the security groups as described in
> > [1].
> >
> > The (hopefully) relevant configuration settings are:
> >
> > /etc/nova/nova.conf
> > […]
> > security_group_api=quantum
> > network_api_class=nova.network.quantumv2.api.API
> > libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
> > firewall_driver=nova.virt.firewall.NoopFirewallDriver
> > […]
> >
> > /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
> > […]
> > firewall_driver =
> > quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
> > […]
> >
> > The networks for the VMs are attached to the compute nodes via VLAN
> > encapsulation and are correctly mapped to the VMs.
> >
> > From our point of view, we understand the need for the
> > "ovs-bridge <> veth glue <> linux-bridge (for filtering) <> vm"
> > construction, and we have verified the individual components in our
> > deployment. See [2].
> >
> > Everything is working except the security groups.
> > We observed that iptables rules are generated for the quantum-openvswi-*
> > chains, and that traffic arriving untagged on the machine (the native
> > VLAN, used for management) is processed by iptables, but traffic that
> > arrives VLAN-encapsulated is not.
> >
> > The traffic that Open vSwitch untags and bridges via the veth pair and
> > the tap into the machine is not processed by the iptables rules.
> >
> > We have no remaining clue how to solve this issue. :(
> >
> > Greetings
> >    Sebastian
> >
> > [1]
> > http://docs.openstack.org/trunk/openstack-network/admin/content/under_the_hood_openvswitch.html
> > [2] http://pastebin.com/WXMH6y4A
> >
> > --
> > Sebastian Porombka, M.Sc.
> > Zentrum für Informations- und Medientechnologien (IMT)
> > Universität Paderborn
> >
> > E-Mail: porombka at uni-paderborn.de
> > Tel.: 05251/60-5999
> > Fax: 05251/60-48-5999
> > Raum: N5.314
> >
> > --------------------------------------------
> > Q: Why is this email five sentences or less?
> > A: http://five.sentenc.es <http://five.sentenc.es/>
> >
> > Please consider the environment before printing this email.
> >
> >
> >
> > ------------------------------
> >
> > Message: 2
> > Date: Mon, 2 Sep 2013 15:21:10 +0100 (BST)
> > From: Darragh O'Reilly <dara2002-openstack at yahoo.com>
> > To: Sebastian Porombka <porombka at uni-paderborn.de>,
> >         "openstack-operators at lists.openstack.org"
> >         <openstack-operators at lists.openstack.org>
> > Subject: Re: [Openstack-operators] Quantum Security Groups not working
> >         -       iptables rules are not Evaluated
> > Message-ID:
> >         <1378131670.96925.YahooMailNeo at web172405.mail.ir2.yahoo.com>
> > Content-Type: text/plain; charset=utf-8
> >
> >
> > It is not working because you are using the OVS bridge compatibility
> > module.
> >
> > Re,
> > Darragh.
> >
> >
> >
> > ------------------------------
> >
> > Message: 3
> > Date: Mon, 2 Sep 2013 14:41:34 +0000
> > From: Robert van Leeuwen <Robert.vanLeeuwen at spilgames.com>
> > To: "openstack-operators at lists.openstack.org"
> >         <openstack-operators at lists.openstack.org>
> > Subject: [Openstack-operators] Running DHCP agent in HA (grizzly)
> > Message-ID:
> >         <79E00D9220302D448C1D59B5224ED12D8D9C7AE1 at EchoDB02.spil.local>
> > Content-Type: text/plain; charset="us-ascii"
> >
> > Hi,
> >
> > How would one make sure multiple DHCP agents run for the same segment?
> > I currently have two DHCP agents running, but only one is added to a
> > network when the network is created.
> >
> > I can add the other one manually with "quantum dhcp-agent-network-add",
> > but I would like this to happen automatically.
> >
> > Thx,
> > Robert
> >
> >
> >
> >
> > ------------------------------
> >
> > Message: 4
> > Date: Mon, 2 Sep 2013 11:00:41 -0400
> > From: Lorin Hochstein <lorin at nimbisservices.com>
> > To: "Darragh O'Reilly" <dara2002-openstack at yahoo.com>
> > Cc: "openstack-operators at lists.openstack.org"
> >         <openstack-operators at lists.openstack.org>
> > Subject: Re: [Openstack-operators] Quantum Security Groups not working
> >         - iptables rules are not Evaluated
> > Message-ID:
> >         <
> > CADzpNMXaPHQCdgjCwOJDK-fC4E6KF14K_gvr5AHnwknEwH4N8w at mail.gmail.com>
> > Content-Type: text/plain; charset="windows-1252"
> >
> > Darragh:
> >
> > Can you elaborate on this a little more? Do you mean that the "brcompat"
> > kernel module has been loaded, and this breaks security groups with the ovs
> > plugin? Should we add something in the documentation about this?
> >
> > Do you mean that the problem is that the ovs-brcompatd service is running,
> > or that the openvswitch-brcompat package is installed?
> >
> > Lorin
> >
> > On Mon, Sep 2, 2013 at 10:21 AM, Darragh O'Reilly <
> > dara2002-openstack at yahoo.com> wrote:
> >
> > >
> > > it is not working because you are using the ovs bridge compatibility
> > > module.
> > >
> > > Re,
> > > Darragh.
> > >
> >
> >
> >
> > --
> > Lorin Hochstein
> > Lead Architect - Cloud Services
> > Nimbis Services, Inc.
> > www.nimbisservices.com
> >
> > ------------------------------
> >
> > Message: 5
> > Date: Mon, 2 Sep 2013 17:09:18 +0200
> > From: Simon Pasquier <simon.pasquier at bull.net>
> > To: <openstack-operators at lists.openstack.org>
> > Subject: Re: [Openstack-operators] Running DHCP agent in HA (grizzly)
> > Message-ID: <5224AA1E.60201 at bull.net>
> > Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
> >
> > Hello,
> > I guess you are running Grizzly. Automatically scheduling multiple DHCP
> > agents per network requires Havana.
> > Simon
> >
> >
> >
> > --
> > Simon Pasquier
> > Software Engineer
> > Bull, Architect of an Open World
> > Phone: + 33 4 76 29 71 49
> > http://www.bull.com
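For reference, a sketch of the Havana option that enables this automatic scheduling (the option name is `dhcp_agents_per_network`; the value 2 is just an example):

```ini
# neutron.conf (Havana and later) - every newly created network is
# scheduled onto this many DHCP agents automatically.
dhcp_agents_per_network = 2
```

On Grizzly, the manual `quantum dhcp-agent-network-add <agent-id> <net-id>` that Robert mentions remains the only way to attach a second agent.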
> >
> >
> >
> > ------------------------------
> >
> > Message: 6
> > Date: Mon, 2 Sep 2013 16:28:37 +0100 (BST)
> > From: Darragh O'Reilly <dara2002-openstack at yahoo.com>
> > To: Lorin Hochstein <lorin at nimbisservices.com>
> > Cc: "openstack-operators at lists.openstack.org"
> >         <openstack-operators at lists.openstack.org>
> > Subject: Re: [Openstack-operators] Quantum Security Groups not working
> >         -       iptables rules are not Evaluated
> > Message-ID:
> >         <1378135717.92851.YahooMailNeo at web172406.mail.ir2.yahoo.com>
> > Content-Type: text/plain; charset=utf-8
> >
> > Hi Lorin,
> >
> > sure, sorry for the brevity. It seems brcompat is being used, because
> > qbr0188455b-25 appears in the 'ovs-vsctl show' output - so it was created
> > as an OVS bridge, but it should have been created as a Linux bridge. I have
> > never used brcompat, but I believe it intercepts calls from brctl and
> > configures OVS bridges instead of Linux bridges. I'm not sure how to
> > uninstall/disable it - it's probably an operating-system package.
> >
> > I don't think any OpenStack doc says to install/enable it.
> >
> > Re,
> > Darragh.
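A quick check for the compatibility layer on a compute node might look like the sketch below (assumptions: the module is named `brcompat` and ships with the Debian/Ubuntu `openvswitch-brcompat` package, matching what is discussed above; run it on the affected host):

```shell
#!/bin/sh
# Detect whether the OVS bridge-compatibility layer is active. If it is,
# brctl calls create OVS bridges, so the qbrXXXX filtering bridge that the
# hybrid driver expects to be a Linux bridge ends up in OVS instead.
status="not loaded"
if lsmod 2>/dev/null | grep -q '^brcompat'; then
    status="loaded"
fi
echo "brcompat kernel module: $status"

# Cross-check: a healthy qbrXXXX device should appear in 'brctl show'
# (Linux bridge) and be absent from 'ovs-vsctl show'.
```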
> >
> >
> >
> >
> > ------------------------------
> >
> > Message: 7
> > Date: Mon, 2 Sep 2013 12:51:55 -0300
> > From: Juan José Pavlik Salles <jjpavlik at gmail.com>
> > To: "openstack-operators at lists.openstack.org"
> >         <openstack-operators at lists.openstack.org>
> > Subject: [Openstack-operators] Migrating instances in grizzly
> > Message-ID:
> >         <
> > CAKCETkePaRLJz4U96QLhB4_qFgs9R_YUjdVtf4yq8Jbdepoquw at mail.gmail.com>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > Hi guys, last Friday I started testing live migration in my Grizzly cloud
> > with shared storage (GFS2), but I ran into a slightly weird problem.
> >
> > This is the status before migrating:
> >
> > - I have an instance named p9 (also called instance-00000022) running on
> > the "acelga" compute node.
> >
> > root@acelga:~/tools# virsh list
> >  Id    Name                           State
> > ----------------------------------------------------
> >  6     instance-00000022              running
> >
> > root@acelga:~/tools#
> >
> > root@cebolla:~/tool# virsh list
> >  Id    Nombre                         Estado
> > ----------------------------------------------------
> >
> > root@cebolla:~/tool#
> >
> > -Here you can see all the info about the instance
> >
> > root@cebolla:~/tool# nova --os-username=noc-admin --os-tenant-name=noc
> > --os-password=XXXXXXX --os-auth-url http://172.19.136.1:35357/v2.0 show
> > de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09
> > +-------------------------------------+-----------------------------------------------------------+
> > | Property                            | Value                                                     |
> > +-------------------------------------+-----------------------------------------------------------+
> > | status                              | ACTIVE                                                    |
> > | updated                             | 2013-09-02T15:27:39Z                                      |
> > | OS-EXT-STS:task_state               | None                                                      |
> > | OS-EXT-SRV-ATTR:host                | acelga                                                    |
> > | key_name                            | None                                                      |
> > | image                               | Ubuntu 12.04.2 LTS (1359ca8d-23a2-40e8-940f-d90b3e68bb39) |
> > | vlan1 network                       | 172.16.16.175                                             |
> > | hostId                              | 81be94870821e17e327d92e9c80548ffcdd37d24054a235116669f53  |
> > | OS-EXT-STS:vm_state                 | active                                                    |
> > | OS-EXT-SRV-ATTR:instance_name       | instance-00000022                                         |
> > | OS-EXT-SRV-ATTR:hypervisor_hostname | acelga.psi.unc.edu.ar                                     |
> > | flavor                              | m1.tiny (1)                                               |
> > | id                                  | de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09                      |
> > | security_groups                     | [{u'name': u'default'}]                                   |
> > | user_id                             | 20390b639d4449c18926dca5e038ec5e                          |
> > | name                                | p9                                                        |
> > | created                             | 2013-09-02T15:27:06Z                                      |
> > | tenant_id                           | d1e3aae242f14c488d2225dcbf1e96d6                          |
> > | OS-DCF:diskConfig                   | MANUAL                                                    |
> > | metadata                            | {}                                                        |
> > | accessIPv4                          |                                                           |
> > | accessIPv6                          |                                                           |
> > | progress                            | 0                                                         |
> > | OS-EXT-STS:power_state              | 1                                                         |
> > | OS-EXT-AZ:availability_zone         | nova                                                      |
> > | config_drive                        |                                                           |
> > +-------------------------------------+-----------------------------------------------------------+
> > root@cebolla:~/tool#
> >
> > - So I try to move it to the other node, "cebolla":
> >
> > root@acelga:~/tools# nova --os-username=noc-admin --os-tenant-name=noc
> > --os-password=HjZ5V9yj --os-auth-url http://172.19.136.1:35357/v2.0
> > live-migration de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09 cebolla
> > root@acelga:~/tools# virsh list
> >  Id    Name                           State
> > ----------------------------------------------------
> >
> > root@acelga:~/tools#
> >
> > No error messages at all on the "acelga" compute node so far. If I check
> > the other node, I can see the instance has been migrated:
> >
> > root@cebolla:~/tool# virsh list
> >  Id    Nombre                         Estado
> > ----------------------------------------------------
> >  11    instance-00000022              ejecutando
> >
> > root@cebolla:~/tool#
> >
> >
> > - BUT... after a few seconds I get this in "acelga"'s nova-compute.log:
> >
> >
> > 2013-09-02 15:35:45.784 4601 DEBUG nova.openstack.common.rpc.common [-]
> > Timed out waiting for RPC response: timed out _error_callback
> > /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py:628
> > 2013-09-02 15:35:45.790 4601 ERROR nova.utils [-] in fixed duration looping call
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils Traceback (most recent call last):
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 594, in _inner
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     self.f(*self.args, **self.kw)
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3129, in wait_for_live_migration
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     migrate_data)
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3208, in _post_live_migration
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     migration)
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 664, in network_migrate_instance_start
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     migration)
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/conductor/rpcapi.py", line 415, in network_migrate_instance_start
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     return self.call(context, msg, version='1.41')
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/proxy.py", line 80, in call
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     return rpc.call(context, self._get_topic(topic), msg, timeout)
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/__init__.py", line 140, in call
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     return _get_impl().call(CONF, context, topic, msg, timeout)
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", line 798, in call
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     rpc_amqp.get_connection_pool(conf, Connection))
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 612, in call
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     rv = list(rv)
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 554, in __iter__
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     self.done()
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     self.gen.next()
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 551, in __iter__
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     self._iterator.next()
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", line 648, in iterconsume
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     yield self.ensure(_error_callback, _consume)
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", line 566, in ensure
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     error_callback(e)
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", line 629, in _error_callback
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils     raise rpc_common.Timeout()
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils Timeout: Timeout while waiting on RPC response.
> > 2013-09-02 15:35:45.790 4601 TRACE nova.utils
> >
> >
> > -And the VM state never changes back to ACTIVE from MIGRATING:
> >
> >
> > *root at cebolla:~/tool# nova --os-username=noc-admin --os-tenant-name=noc
> > --os-password=XXXXX --os-auth-url http://172.19.136.1:35357/v2.0 show
> > de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09*
> > *
> >
> >
> +-------------------------------------+-----------------------------------------------------------+
> > *
> > *| Property                            | Value
> >                         |*
> > *
> >
> >
> +-------------------------------------+-----------------------------------------------------------+
> > *
> > *| status                              | MIGRATING
> >                         |*
> > *| updated                             | 2013-09-02T15:33:54Z
> >                        |*
> > *| OS-EXT-STS:task_state               | migrating
> >                         |*
> > *| OS-EXT-SRV-ATTR:host                | acelga
> >                        |*
> > *| key_name                            | None
> >                        |*
> > *| image                               | Ubuntu 12.04.2 LTS
> > (1359ca8d-23a2-40e8-940f-d90b3e68bb39) |*
> > *| vlan1 network                       | 172.16.16.175
> >                         |*
> > *| hostId                              |
> > 81be94870821e17e327d92e9c80548ffcdd37d24054a235116669f53  |*
> > *| OS-EXT-STS:vm_state                 | active
> >                        |*
> > *| OS-EXT-SRV-ATTR:instance_name       | instance-00000022
> >                         |*
> > *| OS-EXT-SRV-ATTR:hypervisor_hostname | acelga.psi.unc.edu.ar
> >                         |*
> > *| flavor                              | m1.tiny (1)
> >                         |*
> > *| id                                  |
> > de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09                      |*
> > *| security_groups                     | [{u'name': u'default'}]
> >                         |*
> > *| user_id                             | 20390b639d4449c18926dca5e038ec5e
> >                        |*
> > *| name                                | p9
> >                        |*
> > *| created                             | 2013-09-02T15:27:06Z
> >                        |*
> > *| tenant_id                           | d1e3aae242f14c488d2225dcbf1e96d6
> >                        |*
> > *| OS-DCF:diskConfig                   | MANUAL
> >                        |*
> > *| metadata                            | {}
> >                        |*
> > *| accessIPv4                          |
> >                         |*
> > *| accessIPv6                          |
> >                         |*
> > *| OS-EXT-STS:power_state              | 1
> >                         |*
> > *| OS-EXT-AZ:availability_zone         | nova
> >                        |*
> > *| config_drive                        |
> >                         |*
> > *
> >
> >
> +-------------------------------------+-----------------------------------------------------------+
> > *
> > *root at cebolla:~/tool#*
> >
> >
> > Funny fact:
> > -The VM still answers ping after migration, so I think this is good.
> >
> > Any ideas about this problem? At first I thought it could be related to a
> > connection problem between the nodes, but the VM migrates completely at the
> > hypervisor level; somehow there is some "instance has been migrated" ACK
> > missing.
> >
> >
> > --
> > Pavlik Salles Juan José
> > -------------- next part --------------
> > An HTML attachment was scrubbed...
> > URL: <
> >
> http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/e8b7058b/attachment-0001.html
> > >
> >
> > ------------------------------
> >
> > Message: 8
> > Date: Mon, 2 Sep 2013 19:12:40 +0000
> > From: Sebastian Porombka <porombka at uni-paderborn.de>
> > To: "openstack-operators at lists.openstack.org"
> >         <openstack-operators at lists.openstack.org>
> > Cc: Holger Nitsche <hn at uni-paderborn.de>
> > Subject: Re: [Openstack-operators] Quantum Security Groups not working
> >         - iptables rules are not Evaluated
> > Message-ID: <CE4AAA44.491DD%porombka at uni-paderborn.de>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > Hi
> >
> > Yes, openvswitch-brcompat (as an Ubuntu package) was installed.
> > Uninstalling the package removed the qbr* interfaces in
> > 'ovs-vsctl show' and solved the problem. Big thanks to you.
> >
> > Maybe a short sentence in the documentation would be nice. :)
> >
> > Greetings
> >   Sebastian
> >
> > --
> > Sebastian Porombka, M.Sc.
> > Zentrum für Informations- und Medientechnologien (IMT)
> > Universität Paderborn
> >
> > E-Mail: porombka at uni-paderborn.de
> > Tel.: 05251/60-5999
> > Fax: 05251/60-48-5999
> > Raum: N5.314
> >
> > --------------------------------------------
> > Q: Why is this email five sentences or less?
> > A: http://five.sentenc.es
> >
> > Please consider the environment before printing this email.
> >
> >
> >
> >
> >
> > Am 02.09.13 17:28 schrieb "Darragh O'Reilly" unter
> > <dara2002-openstack at yahoo.com>:
> >
> > >Hi Lorin,
> > >
> > >sure, sorry for the brevity. It seems brcompat is being used
> because
> > >qbr0188455b-25 appears in the 'ovs-vsctl show' output - so it was
> created
> > >as an OVS bridge, but it should have been created as a Linux bridge. I
> > >have never used brcompat, but I believe it intercepts calls from brctl
> > >and configures OVS bridges instead of Linux bridges. I'm not sure how to
> > >uninstall/disable it - it's probably an Operating System package.
> > >
> > >I don't think any Openstack doc says to install/enable it.
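For anyone hitting the same symptom, a quick check for whether the bridge-compatibility layer is in play might look like this (a sketch; the module and package names are the Ubuntu ones mentioned in this thread, and may differ on other distributions):

```shell
#!/bin/sh
# Sketch: detect the OVS bridge-compatibility layer (Ubuntu naming assumed).
if lsmod 2>/dev/null | grep -q '^brcompat'; then
    echo "brcompat kernel module is loaded"
else
    echo "brcompat kernel module is not loaded"
fi
# On Ubuntu the fix reported in this thread was simply:
#   apt-get remove openvswitch-brcompat
# after which the qbr* devices can be created as plain Linux bridges again.
```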
> > >
> > >Re,
> > >Darragh.
> > >
> > >>________________________________
> > >> From: Lorin Hochstein <lorin at nimbisservices.com>
> > >>To: Darragh O'Reilly <dara2002-openstack at yahoo.com>
> > >>Cc: Sebastian Porombka <porombka at uni-paderborn.de>;
> > >>"openstack-operators at lists.openstack.org"
> > >><openstack-operators at lists.openstack.org>
> > >>Sent: Monday, 2 September 2013, 16:00
> > >>Subject: Re: [Openstack-operators] Quantum Security Groups not working
> -
> > >>iptables rules are not Evaluated
> > >>
> > >>
> > >>
> > >>Darragh:
> > >>
> > >>
> > >>Can you elaborate on this a little more? Do you mean that the
> "brcompat"
> > >>kernel module has been loaded, and this breaks security groups with the
> > >>ovs plugin? Should we add something in the documentation about this?
> > >>
> > >>
> > >>Lorin
> > >>
> > >>
> > >>
> > >>
> > >>Do you mean that the problem is that the ovs-brcompatd service is
> > >>running?
> > >>
> > >>
> > >>Or that the openvswitch-brcompat package is installed?
> > >>
> > >>
> > >>
> > >>On Mon, Sep 2, 2013 at 10:21 AM, Darragh O'Reilly
> > >><dara2002-openstack at yahoo.com> wrote:
> > >>
> > >>
> > >>>It is not working because you are using the OVS bridge compatibility
> > >>>module.
> > >>>
> > >>>Re,
> > >>>Darragh.
> > >>>
> > >>>>________________________________
> > >>>> From: Sebastian Porombka <porombka at uni-paderborn.de>
> > >>>>To: "openstack-operators at lists.openstack.org"
> > >>>><openstack-operators at lists.openstack.org>
> > >>>>Sent: Monday, 2 September 2013, 14:48
> > >>>>Subject: [Openstack-operators] Quantum Security Groups not working -
> > >>>>iptables rules are not Evaluated
> > >>>
> > >>>>
> > >>>>
> > >>>>
> > >>>>Hi folks.
> > >>>>
> > >>>>
> > >>>>We're currently deploying an OpenStack (Grizzly) cloud
> > >>>>environment
> > >>>>and running into problems implementing the security groups as
> > >>>>described in [1].
> > >>>>
> > >>>>
> > >>>>The (hopefully) relevant configuration settings are:
> > >>>>
> > >>>>
> > >>>>/etc/nova/nova.conf
> > >>>>[…]
> > >>>>security_group_api=quantum
> > >>>>network_api_class=nova.network.quantumv2.api.API
> > >>>>libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
> > >>>>firewall_driver=nova.virt.firewall.NoopFirewallDriver
> > >>>>[…]
> > >>>>
> > >>>>
> > >>>>/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
> > >>>>[…]
> > >>>>firewall_driver =
> > >>>>quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
> > >>>>[…]
> > >>>>
> > >>>>
> > >>>>The networks for the VMs are attached to the compute nodes via VLAN
> > >>>>encapsulation and correctly mapped to the VMs.
> > >>>>
> > >>>>
> > >>>>From our point of view, we understand the need for the
> > >>>>"ovs-bridge <> veth glue <> linux-bridge (for filtering) <>
> > >>>>vm" construction
> > >>>>and have observed the individual components in our deployment. See [2].
> > >>>>
> > >>>>
> > >>>>Everything is working except the security groups.
> > >>>>We observed that iptables rules are generated for the
> > >>>>quantum-openvswi-* chains.
> > >>>>Traffic arriving untagged (native VLAN for management) on the
> > >>>>machine is processed by iptables, but not
> > >>>>the traffic that arrives encapsulated.
> > >>>>
> > >>>>
> > >>>>The traffic which is unpacked by openvswitch and is bridged via the
> > >>>>veth and the tap into
> > >>>>the machine isn't processed by the iptables rules.
> > >>>>
> > >>>>
> > >>>>We have no remaining clue how to solve this issue. :(
> > >>>>
> > >>>>
> > >>>>Greetings
> > >>>>   Sebastian
> > >>>>
> > >>>>
> > >>>>[1]
> > >>>>
> > http://docs.openstack.org/trunk/openstack-network/admin/content/under_t
> > >>>>he_hood_openvswitch.html
> > >>>>[2] http://pastebin.com/WXMH6y4A
> > >>>>
> > >>>>
> > >>>>--
> > >>>>Sebastian Porombka, M.Sc.
> > >>>>Zentrum für Informations- und Medientechnologien (IMT)
> > >>>>Universität Paderborn
> > >>>>
> > >>>>
> > >>>>E-Mail: porombka at uni-paderborn.de
> > >>>>Tel.: 05251/60-5999
> > >>>>Fax: 05251/60-48-5999
> > >>>>Raum: N5.314
> > >>>>
> > >>>>
> > >>>>--------------------------------------------
> > >>>>Q: Why is this email five sentences or less?
> > >>>>A: http://five.sentenc.es
> > >>>>
> > >>>>
> > >>>>Please consider the environment before printing this email.
> > >>>>_______________________________________________
> > >>>>OpenStack-operators mailing list
> > >>>>OpenStack-operators at lists.openstack.org
> > >>>>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> > >>>>
> > >>>>
> > >>>>
> > >>>
> > >>>_______________________________________________
> > >>>OpenStack-operators mailing list
> > >>>OpenStack-operators at lists.openstack.org
> > >>>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> > >>>
> > >>
> > >>
> > >>
> > >>--
> > >>
> > >>Lorin Hochstein
> > >>
> > >>Lead Architect - Cloud Services
> > >>Nimbis Services, Inc.
> > >>www.nimbisservices.com
> > >>
> > >>
> > -------------- next part --------------
> > A non-text attachment was scrubbed...
> > Name: smime.p7s
> > Type: application/pkcs7-signature
> > Size: 5443 bytes
> > Desc: not available
> > URL: <
> >
> http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/30940657/attachment-0001.bin
> > >
> >
> > ------------------------------
> >
> > Message: 9
> > Date: Mon, 2 Sep 2013 21:44:09 -0300
> > From: Juan José Pavlik Salles <jjpavlik at gmail.com>
> > To: "openstack-operators at lists.openstack.org"
> >         <openstack-operators at lists.openstack.org>
> > Subject: Re: [Openstack-operators] Migrating instances in grizzly
> > Message-ID:
> >         <
> > CAKCETkfuguB_AAdY9iYfddO65hNF-jVvhiGu62XFu3XhvaoBuQ at mail.gmail.com>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > I've also found this in nova-conductor.log:
> >
> > 2013-09-02 15:35:27.208 DEBUG nova.openstack.common.rpc.common
> > [req-e0473533-89af-4ff5-b6fa-4b0b6eb50a6d
> 31020076174943bdb7486c330a298d93
> > d1e3aae242f14c488d2225dc
> > bf1e96d6] Timed out waiting for RPC response: timed out _error_callback
> >
> >
> /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py:628
> > 2013-09-02 15:35:27.222 ERROR nova.openstack.common.rpc.amqp
> > [req-e0473533-89af-4ff5-b6fa-4b0b6eb50a6d
> 31020076174943bdb7486c330a298d93
> > d1e3aae242f14c488d2225dcbf
> > 1e96d6] Exception during message handling
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> Traceback
> > (most recent call last):
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py",
> line
> > 430, in _proce
> > ss_data
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> rval
> > = self.proxy.dispatch(ctxt, version, method, **args)
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> >
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py",
> > line 133, in
> > dispatch
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> > return getattr(proxyobj, method)(ctxt, **kwargs)
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> > "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 399,
> in
> > network_migrat
> > e_instance_start
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> > self.network_api.migrate_instance_start(context, instance, migration)
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> > "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 89, in
> wrapped
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> > return func(self, context, *args, **kwargs)
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> > "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 501, in
> > migrate_instance_sta
> > rt
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> > self.network_rpcapi.migrate_instance_start(context, **args)
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> > "/usr/lib/python2.7/dist-packages/nova/network/rpcapi.py", line 333, in
> > migrate_instance_
> > start
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> > version='1.2')
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/proxy.py",
> line
> > 80, in call
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> > return rpc.call(context, self._get_topic(topic), msg, timeout)
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/__init__.py",
> > line 140, in ca
> > ll
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> > return _get_impl().call(CONF, context, topic, msg, timeout)
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> >
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",
> > line 798, in
> > call
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> > rpc_amqp.get_connection_pool(conf, Connection))
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py",
> line
> > 612, in call
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp     rv
> =
> > list(rv)
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py",
> line
> > 554, in __iter__
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> > self.done()
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> > "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> > self.gen.next()
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py",
> line
> > 551, in __iter__
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> > self._iterator.next()
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> >
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",
> > line 648, in iterconsume
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> yield
> > self.ensure(_error_callback, _consume)
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> >
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",
> > line 566, in ensure
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> > error_callback(e)
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp   File
> >
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",
> > line 629, in _error_callback
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> raise
> > rpc_common.Timeout()
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> Timeout:
> > Timeout while waiting on RPC response.
> > 2013-09-02 15:35:27.222 1363 TRACE nova.openstack.common.rpc.amqp
> > 2013-09-02 15:35:27.237 ERROR nova.openstack.common.rpc.common
> > [req-e0473533-89af-4ff5-b6fa-4b0b6eb50a6d
> 31020076174943bdb7486c330a298d93
> > d1e3aae242f14c488d2225dcbf1e96d6] Returning exception Timeout while
> waiting
> > on RPC response. to caller
> >
> > Does anybody know all the steps it takes to live-migrate an instance? It
> > seems to be stopping inside the network_migrate_instance_start function;
> > really no clue at all...
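The failure mode in the traces above — rpc.call blocking until rpc_common.Timeout fires — can be illustrated in plain Python (a toy sketch, not nova's actual code): the caller publishes a request and waits on a reply queue, and if nobody answers within the timeout a Timeout is raised, leaving the task stuck in its intermediate state (here, MIGRATING).

```python
import queue
import threading

class Timeout(Exception):
    """Stand-in for nova's rpc_common.Timeout."""

class ToyRpc:
    """Toy request/reply RPC: the real broker (kombu/RabbitMQ) is replaced
    by an in-process queue so the timeout behaviour is easy to see."""
    def __init__(self):
        self.requests = queue.Queue()

    def call(self, msg, timeout=2.0):
        reply = queue.Queue()
        self.requests.put((msg, reply))
        try:
            # Analogous to waiting on the AMQP reply queue in impl_kombu.
            return reply.get(timeout=timeout)
        except queue.Empty:
            raise Timeout("Timeout while waiting on RPC response.")

def well_behaved_worker(rpc):
    # A consumer that actually answers, like a healthy nova-conductor.
    msg, reply = rpc.requests.get()
    reply.put("done: %s" % msg)

rpc = ToyRpc()
threading.Thread(target=well_behaved_worker, args=(rpc,), daemon=True).start()
print(rpc.call("network_migrate_instance_start"))  # answered in time

# With nobody consuming, the call times out just like the trace above:
lonely = ToyRpc()
try:
    lonely.call("network_migrate_instance_start", timeout=0.2)
except Timeout as e:
    print(e)
```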
> >
> >
> > 2013/9/2 Juan José Pavlik Salles <jjpavlik at gmail.com>
> >
> > > Hi guys, last Friday I started testing live migration in my Grizzly
> > > cloud with shared storage (GFS2) but I ran into a problem, a little
> > > weird one:
> > >
> > > This is the status before migrating:
> > >
> > > -I have instance p9, also called instance-00000022, running on the
> > > "acelga" compute node.
> > >
> > > *root at acelga:~/tools# virsh list*
> > > * Id    Name                           State*
> > > *----------------------------------------------------*
> > > * 6     instance-00000022              running*
> > > *
> > > *
> > > *root at acelga:~/tools# *
> > > *
> > > *
> > > *
> > > *
> > > *root at cebolla:~/tool# virsh list*
> > > * Id    Nombre                         Estado*
> > > *----------------------------------------------------*
> > > *
> > > *
> > > *root at cebolla:~/tool# *
> > >
> > > -Here you can see all the info about the instance
> > >
> > > *root at cebolla:~/tool# nova --os-username=noc-admin
> --os-tenant-name=noc
> > > --os-password=XXXXXXX --os-auth-url http://172.19.136.1:35357/v2.0 show
> > > de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09*
> > > *
> > >
> >
> +-------------------------------------+-----------------------------------------------------------+
> > > *
> > > *| Property                            | Value
> > >                           |*
> > > *
> > >
> >
> +-------------------------------------+-----------------------------------------------------------+
> > > *
> > > *| status                              | ACTIVE
> > >                          |*
> > > *| updated                             | 2013-09-02T15:27:39Z
> > >                          |*
> > > *| OS-EXT-STS:task_state               | None
> > >                          |*
> > > *| OS-EXT-SRV-ATTR:host                | acelga
> > >                          |*
> > > *| key_name                            | None
> > >                          |*
> > > *| image                               | Ubuntu 12.04.2 LTS
> > > (1359ca8d-23a2-40e8-940f-d90b3e68bb39) |*
> > > *| vlan1 network                       | 172.16.16.175
> > >                           |*
> > > *| hostId                              |
> > > 81be94870821e17e327d92e9c80548ffcdd37d24054a235116669f53  |*
> > > *| OS-EXT-STS:vm_state                 | active
> > >                          |*
> > > *| OS-EXT-SRV-ATTR:instance_name       | instance-00000022
> > >                           |*
> > > *| OS-EXT-SRV-ATTR:hypervisor_hostname | acelga.psi.unc.edu.ar
> > >                           |*
> > > *| flavor                              | m1.tiny (1)
> > >                           |*
> > > *| id                                  |
> > > de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09                      |*
> > > *| security_groups                     | [{u'name': u'default'}]
> > >                           |*
> > > *| user_id                             |
> 20390b639d4449c18926dca5e038ec5e
> > >                          |*
> > > *| name                                | p9
> > >                          |*
> > > *| created                             | 2013-09-02T15:27:06Z
> > >                          |*
> > > *| tenant_id                           |
> d1e3aae242f14c488d2225dcbf1e96d6
> > >                          |*
> > > *| OS-DCF:diskConfig                   | MANUAL
> > >                          |*
> > > *| metadata                            | {}
> > >                          |*
> > > *| accessIPv4                          |
> > >                           |*
> > > *| accessIPv6                          |
> > >                           |*
> > > *| progress                            | 0
> > >                           |*
> > > *| OS-EXT-STS:power_state              | 1
> > >                           |*
> > > *| OS-EXT-AZ:availability_zone         | nova
> > >                          |*
> > > *| config_drive                        |
> > >                           |*
> > > *
> > >
> >
> +-------------------------------------+-----------------------------------------------------------+
> > > *
> > > *root at cebolla:~/tool#*
> > >
> > > -So I try to move it to the other node, "cebolla":
> > >
> > > *root at acelga:~/tools# nova --os-username=noc-admin --os-tenant-name=noc
> > > --os-password=HjZ5V9yj --os-auth-url http://172.19.136.1:35357/v2.0
> > > live-migration de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09 cebolla*
> > > *root at acelga:~/tools# virsh list*
> > > * Id    Name                           State*
> > > *----------------------------------------------------*
> > > *
> > > *
> > > *root at acelga:~/tools#*
> > >
> > > No error messages at all on the "acelga" compute node so far. If I check
> > > the other node I can see the instance has been migrated:
> > >
> > > *root at cebolla:~/tool# virsh list*
> > > * Id    Nombre                         Estado*
> > > *----------------------------------------------------*
> > > * 11    instance-00000022              ejecutando*
> > > *
> > > *
> > > *root at cebolla:~/tool#*
> > >
> > >
> > > -BUT... after a few seconds I get this in "acelga"'s nova-compute.log:
> > >
> > >
> > > *2013-09-02 15:35:45.784 4601 DEBUG nova.openstack.common.rpc.common
> [-]
> > > Timed out waiting for RPC response: timed out _error_callback
> > >
> >
> /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py:628
> > > *
> > > *2013-09-02 15:35:45.790 4601 ERROR nova.utils [-] in fixed duration
> > > looping call*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils Traceback (most recent
> > > call last):*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > > "/usr/lib/python2.7/dist-packages/nova/utils.py", line 594, in _inner*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils     self.f(*self.args,
> **
> > > self.kw)*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line
> > 3129,
> > > in wait_for_live_migration*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils     migrate_data)*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3208,
> in
> > > _post_live_migration*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils     migration)*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > > "/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 664, in
> > > network_migrate_instance_start*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils     migration)*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > > "/usr/lib/python2.7/dist-packages/nova/conductor/rpcapi.py", line 415,
> in
> > > network_migrate_instance_start*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils     return
> > > self.call(context, msg, version='1.41')*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/proxy.py",
> > line
> > > 80, in call*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils     return
> > > rpc.call(context, self._get_topic(topic), msg, timeout)*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > >
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/__init__.py",
> > > line 140, in call*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils     return
> > > _get_impl().call(CONF, context, topic, msg, timeout)*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > >
> >
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",
> > > line 798, in call*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils
> > > rpc_amqp.get_connection_pool(conf, Connection))*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py",
> > line
> > > 612, in call*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils     rv = list(rv)*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py",
> > line
> > > 554, in __iter__*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils     self.done()*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > > "/usr/lib/python2.7/contextlib.py", line 24, in __exit__*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils     self.gen.next()*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > > "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py",
> > line
> > > 551, in __iter__*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils
> self._iterator.next()*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > >
> >
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",
> > > line 648, in iterconsume*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils     yield
> > > self.ensure(_error_callback, _consume)*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > >
> >
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",
> > > line 566, in ensure*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils     error_callback(e)*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils   File
> > >
> >
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py",
> > > line 629, in _error_callback*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils     raise
> > > rpc_common.Timeout()*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils Timeout: Timeout while
> > > waiting on RPC response.*
> > > *2013-09-02 15:35:45.790 4601 TRACE nova.utils*
> > >
> > >
> > > -And the VM state never changes back to ACTIVE from MIGRATING:
> > >
> > >
> > > *root at cebolla:~/tool# nova --os-username=noc-admin
> --os-tenant-name=noc
> > > --os-password=XXXXX --os-auth-url http://172.19.136.1:35357/v2.0 show
> > > de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09*
> > > *
> > >
> >
> +-------------------------------------+-----------------------------------------------------------+
> > > *
> > > *| Property                            | Value
> > >                           |*
> > > *
> > >
> >
> +-------------------------------------+-----------------------------------------------------------+
> > > *
> > > *| status                              | MIGRATING
> > >                           |*
> > > *| updated                             | 2013-09-02T15:33:54Z
> > >                          |*
> > > *| OS-EXT-STS:task_state               | migrating
> > >                           |*
> > > *| OS-EXT-SRV-ATTR:host                | acelga
> > >                          |*
> > > *| key_name                            | None
> > >                          |*
> > > *| image                               | Ubuntu 12.04.2 LTS
> > > (1359ca8d-23a2-40e8-940f-d90b3e68bb39) |*
> > > *| vlan1 network                       | 172.16.16.175
> > >                           |*
> > > *| hostId                              |
> > > 81be94870821e17e327d92e9c80548ffcdd37d24054a235116669f53  |*
> > > *| OS-EXT-STS:vm_state                 | active
> > >                          |*
> > > *| OS-EXT-SRV-ATTR:instance_name       | instance-00000022
> > >                           |*
> > > *| OS-EXT-SRV-ATTR:hypervisor_hostname | acelga.psi.unc.edu.ar
> > >                           |*
> > > *| flavor                              | m1.tiny (1)
> > >                           |*
> > > *| id                                  |
> > > de2bcbed-f7b6-40cd-89ca-acf6fe2f2d09                      |*
> > > *| security_groups                     | [{u'name': u'default'}]
> > >                           |*
> > > *| user_id                             |
> 20390b639d4449c18926dca5e038ec5e
> > >                          |*
> > > *| name                                | p9
> > >                          |*
> > > *| created                             | 2013-09-02T15:27:06Z
> > >                          |*
> > > *| tenant_id                           |
> d1e3aae242f14c488d2225dcbf1e96d6
> > >                          |*
> > > *| OS-DCF:diskConfig                   | MANUAL
> > >                          |*
> > > *| metadata                            | {}
> > >                          |*
> > > *| accessIPv4                          |
> > >                           |*
> > > *| accessIPv6                          |
> > >                           |*
> > > *| OS-EXT-STS:power_state              | 1
> > >                           |*
> > > *| OS-EXT-AZ:availability_zone         | nova
> > >                          |*
> > > *| config_drive                        |
> > >                           |*
> > > *
> > >
> >
> +-------------------------------------+-----------------------------------------------------------+
> > > *
> > > *root at cebolla:~/tool#*
> > >
> > >
> > > Funny fact:
> > > -The VM still answers ping after migration, so I think this is good.
> > >
> > > Any ideas about this problem? At first I thought it could be related to a
> > > connection problem between the nodes, but the VM migrates completely at
> > > the hypervisor level; somehow there is some "instance has been migrated"
> > > ACK missing.
> > >
> > >
> > > --
> > > Pavlik Salles Juan José
> > >
> >
> >
> >
> > --
> > Pavlik Salles Juan José
> > -------------- next part --------------
> > An HTML attachment was scrubbed...
> > URL: <
> >
> http://lists.openstack.org/pipermail/openstack-operators/attachments/20130902/d8ca74f2/attachment.html
> > >
> >
> > ------------------------------
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> >
> > End of OpenStack-operators Digest, Vol 35, Issue 1
> > **************************************************
> >
>
> ------------------------------
>
> Message: 2
> Date: Mon, 2 Sep 2013 23:24:19 -0400
> From: Lorin Hochstein <lorin at nimbisservices.com>
> To: yungho <yungho5054 at gmail.com>
> Cc: "openstack-operators at lists.openstack.org"
>         <openstack-operators at lists.openstack.org>
> Subject: Re: [Openstack-operators] OpenStack-operators Digest, Vol 35,
>         Issue 1
> Message-ID:
>         <
> CADzpNMUDv9zGMDYixrRi0J6HMPam136Uxqqq96snr7EtKp92xA at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Youngho:
>
> On Mon, Sep 2, 2013 at 10:51 PM, yungho <yungho5054 at gmail.com> wrote:
>
> > Hello,
> >        I deployed OpenStack Grizzly (G release) on CentOS 6.4, with a
> > Controller Node, a Network Node, and a Compute Node. Networking uses
> > Quantum, but I do not yet understand the Quantum network models: under
> > what circumstances should I use GRE mode, and when VLAN mode?
> >
> >
> Do you have administrative privileges on the network switch that
> connects your compute hosts? If so, I would recommend VLAN mode. You
> will need to configure your switch appropriately.
>
> If you aren't able to modify the settings on your switch, or if your nodes
> are not all connected to the same L2 network, then you will need to use GRE
> tunnels.
>
> I believe that GRE will work in pretty much any scenario (since it only
> requires IP connectivity between the nodes), but the encapsulation adds
> some per-packet overhead. However, I have no first-hand experience with
> it.
>
> Take care,
>
> Lorin
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
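For reference, the mode choice ends up in the OVS plugin configuration on each node. A minimal sketch, from memory of the Grizzly-era `ovs_quantum_plugin.ini` (the ranges, bridge names, and physical network label are illustrative; check your distribution's packaging for the exact file path):

```ini
[OVS]
# VLAN mode: requires switch support; tenant networks are assigned
# VLAN IDs from this range on the named physical network.
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
bridge_mappings = physnet1:br-eth1

# GRE mode (alternative): only needs IP connectivity between nodes.
# tenant_network_type = gre
# enable_tunneling = True
# tunnel_id_ranges = 1:1000
# local_ip = <data-network IP of this node>
```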
>
> ------------------------------
>
> Message: 3
> Date: Tue, 3 Sep 2013 10:18:39 +0200
> From: Simon Pasquier <simon.pasquier at bull.net>
> To: <openstack-operators at lists.openstack.org>
> Subject: Re: [Openstack-operators] Running DHCP agent in HA (grizzly)
> Message-ID: <52259B5F.8050407 at bull.net>
> Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
>
> Forwarding to the list, as it was originally sent only to me
>
>
> -------- Original message --------
> Subject: Re: [Openstack-operators] Running DHCP agent in HA (grizzly)
> Date:    Mon, 2 Sep 2013 11:02:53 -0500
> From:    David Wittman <dwittman at gmail.com>
> To:      Simon Pasquier <simon.pasquier at bull.net>
>
>
>
> Robert,
>
> As Simon mentioned, this isn't a feature in Grizzly yet. For now, you
> can achieve similar functionality by calling the quantum-ha-tool from
> stackforge[1] at a regular interval. More specifically, you're looking
> for the `--replicate-dhcp` option.
>
> [1]:
>
> https://github.com/stackforge/cookbook-openstack-network/blob/master/files/default/quantum-ha-tool.py
>
> Dave
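The "regular interval" above typically means a cron job. A hedged sketch (the install path and log location are assumptions; check the script's options before relying on it):

```shell
# /etc/cron.d/quantum-ha-tool (illustrative): every 5 minutes, re-attach
# DHCP agents so each network is served by more than one agent
*/5 * * * * root /usr/local/bin/quantum-ha-tool.py --replicate-dhcp >> /var/log/quantum-ha-tool.log 2>&1
```

Note that the script will also need Keystone credentials (e.g. the usual OpenStack environment variables), which cron does not provide by default.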
>
>
> On Mon, Sep 2, 2013 at 10:09 AM, Simon Pasquier <simon.pasquier at bull.net
> <mailto:simon.pasquier at bull.net>> wrote:
>
>      Hello,
>      I guess you are running Grizzly. Havana is required to schedule
>      multiple DHCP agents per network automatically.
>      Simon
>
>      On 02/09/2013 16:41, Robert van Leeuwen wrote:
>
>          Hi,
>
>          How would one make sure that multiple DHCP agents run for the
>          same segment?
>          I currently have 2 DHCP agents running, but only one is added to
>          a network when the network is created.
>
>          I can add the other one manually with "quantum
>          dhcp-agent-network-add", but I would like this to happen
>          automatically.
>
>          Thx,
>          Robert
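For reference, the manual workaround mentioned above looks roughly like this (the IDs are placeholders to fill in from your own deployment):

```shell
# Find the ID of the second DHCP agent
quantum agent-list

# Attach the network to that agent as well
quantum dhcp-agent-network-add <dhcp-agent-id> <network-id>

# Verify that both agents now host the network
quantum dhcp-agent-list-hosting-net <network-id>
```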
>
>
>
>
>
>      --
>      Simon Pasquier
>      Software Engineer
>      Bull, Architect of an Open World
>      Phone: +33 4 76 29 71 49
>      http://www.bull.com
>
>
>
>
>
>
>
>
>
> ------------------------------
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> End of OpenStack-operators Digest, Vol 35, Issue 2
> **************************************************
>


More information about the OpenStack-operators mailing list