[Openstack-operators] [openstack-dev] [openstack-operators][neutron][dhcp][dnsmasq]: duplicate entries in addn_hosts causing no IP allocation

Neil Jerram Neil.Jerram at metaswitch.com
Wed Jul 1 11:01:23 UTC 2015


Hi Dani,

I think that would be fine, if it worked.  The <packagename> that you 
want is dnsmasq-base, I believe.
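For what it's worth, I'd expect the command itself to look something like
this (assuming the node can actually reach a repository carrying a newer
build):

    apt-get --only-upgrade install dnsmasq-base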

However, I would not expect it to work on a Fuel 5.1 node, because I 
believe such nodes are set up to use the Fuel master as their package 
repository, and I don't think that a Fuel 5.1 master will have any newer 
dnsmasq packages than what you already have installed.
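
(If you want to check that, running "apt-cache policy dnsmasq-base" on the 
node should show which versions the configured repositories actually offer.)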

I hope that makes sense - happy to explain further if not.

	Neil


On 01/07/15 10:24, Daniel Comnea wrote:
> Neil, much thanks !!!
>
> Any idea if I can just go and run apt-get --only-upgrade install
> <packagename>, or would that be too crazy?
>
> Cheers,
> Dani
>
>
> On Wed, Jul 1, 2015 at 9:23 AM, Neil Jerram <Neil.Jerram at metaswitch.com> wrote:
>
>     Well, the bug discussion seems to point specifically to this dnsmasq
>     fix:
>
>     http://thekelleys.org.uk/gitweb/?p=dnsmasq.git;a=commit;h=9380ba70d67db6b69f817d8e318de5ba1e990b12
>
>              Neil
>
>
>     On 01/07/15 07:34, Daniel Comnea wrote:
>
>         Hi,
>
>         sorry for no feedback, I've been doing more and more tests and, after
>         enabling the dnsmasq log, I found the error below, which I'm no longer
>         sure is related to having duplicated entries:
>
>         dnsmasq-dhcp[21231]: 0 DHCPRELEASE(tap8ecf66b6-72) 192.168.111.24
>         fa:16:3e:72:04:82 unknown lease
>
>         Looking around, it seems I'm hitting this bug [1], but it is not clear
>         from the description what the problem was with dnsmasq 2.59 (which
>         comes with Fuel 5.1).
>
>         Any ideas?
>
>         Cheers,
>         Dani
>
>         [1] https://bugs.launchpad.net/neutron/+bug/1271344
>
>         On Wed, Jun 10, 2015 at 7:13 AM, Daniel Comnea
>         <comnea.dani at gmail.com> wrote:
>
>              Thanks a bunch Kevin!
>
>              I'll try this patch and report back.
>
>              Dani
>
>
>              On Tue, Jun 9, 2015 at 2:50 AM, Kevin Benton
>              <blak111 at gmail.com> wrote:
>
>                  Hi Daniel,
>
>                  I'm concerned that we are encountering out-of-order port
>                  events on the DHCP agent side, so the delete message is
>                  processed before the create message. Would you be willing
>                  to apply a small patch to your DHCP agent to see if it
>                  fixes the issue?
>
>                  If it does fix the issue, you should see occasional
>                  warnings in the DHCP agent log that show "Received message
>                  for port that was already deleted". If it doesn't fix the
>                  issue, we may be losing the delete event entirely. If
>                  that's the case, it would be great if you can enable
>                  debugging on the agent and upload a log of a run when it
>                  happens.
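>
>                  (For what it's worth: the new warning is logged at WARNING
>                  level, so something like
>                  grep "already deleted" /var/log/neutron/dhcp-agent.log
>                  should surface it, assuming the usual Ubuntu log location.
>                  Debug logging can normally be enabled by setting
>                  debug = True in the agent's config, e.g. neutron.conf or
>                  /etc/neutron/dhcp_agent.ini, and restarting
>                  neutron-dhcp-agent.)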
>
>                  Cheers,
>                  Kevin Benton
>
>                  Here is the patch:
>
>                  diff --git a/neutron/agent/dhcp_agent.py b/neutron/agent/dhcp_agent.py
>                  index 71c9709..9b9b637 100644
>                  --- a/neutron/agent/dhcp_agent.py
>                  +++ b/neutron/agent/dhcp_agent.py
>                  @@ -71,6 +71,7 @@ class DhcpAgent(manager.Manager):
>                           self.needs_resync = False
>                           self.conf = cfg.CONF
>                           self.cache = NetworkCache()
>                  +        self.deleted_ports = set()
>                           self.root_helper = config.get_root_helper(self.conf)
>                           self.dhcp_driver_cls = importutils.import_class(self.conf.dhcp_driver)
>                           ctx = context.get_admin_context_without_session()
>                  @@ -151,6 +152,7 @@ class DhcpAgent(manager.Manager):
>                           LOG.info(_('Synchronizing state'))
>                           pool = eventlet.GreenPool(cfg.CONF.num_sync_threads)
>                           known_network_ids = set(self.cache.get_network_ids())
>                  +        self.deleted_ports = set()
>
>                           try:
>                               active_networks = self.plugin_rpc.get_active_networks_info()
>                  @@ -302,6 +304,10 @@ class DhcpAgent(manager.Manager):
>                       @utils.synchronized('dhcp-agent')
>                       def port_update_end(self, context, payload):
>                           """Handle the port.update.end notification event."""
>                  +        if payload['port']['id'] in self.deleted_ports:
>                  +            LOG.warning(_("Received message for port that was "
>                  +                          "already deleted: %s"), payload['port']['id'])
>                  +            return
>                           updated_port = dhcp.DictModel(payload['port'])
>                           network = self.cache.get_network_by_id(updated_port.network_id)
>                           if network:
>                  @@ -315,6 +321,7 @@ class DhcpAgent(manager.Manager):
>                       def port_delete_end(self, context, payload):
>                           """Handle the port.delete.end notification event."""
>                           port = self.cache.get_port_by_id(payload['port_id'])
>                  +        self.deleted_ports.add(payload['port_id'])
>                           if port:
>                               network = self.cache.get_network_by_id(port.network_id)
>                               self.cache.remove_port(port)
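>
>                  (In case it helps: on a stock Ubuntu install the file to
>                  patch usually lives at
>                  /usr/lib/python2.7/dist-packages/neutron/agent/dhcp_agent.py,
>                  and restarting neutron-dhcp-agent afterwards should pick
>                  the change up. Paths and service names may differ on your
>                  deployment.)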
>
>                  On Mon, Jun 8, 2015 at 8:26 AM, Daniel Comnea
>                  <comnea.dani at gmail.com> wrote:
>
>                      Any help, ideas please?
>
>                      Thx,
>                      Dani
>
>                      On Mon, Jun 8, 2015 at 9:25 AM, Daniel Comnea
>                      <comnea.dani at gmail.com> wrote:
>
>                          + Operators
>
>                          Much thanks in advance,
>                          Dani
>
>
>
>
>                          On Sun, Jun 7, 2015 at 6:31 PM, Daniel Comnea
>                          <comnea.dani at gmail.com> wrote:
>
>                              Hi all,
>
>                              I'm running Icehouse (built using Fuel 5.1.1)
>                              on Ubuntu, where dnsmasq is version 2.59-4.
>                              I have a very basic network layout where I have
>                              a private net which has 2 subnets:
>
>         2fb7de9d-d6df-481f-acca-2f7860cffa60 | private-net | e79c3477-d3e5-471c-a728-8d881cf31bee 192.168.110.0/24 |
>                                              |             | f48c3223-8507-455c-9c13-8b727ea5f441 192.168.111.0/24 |
>
>                              and I'm creating VMs via Heat.
>                              What is happening is that sometimes I get
>                              duplicated entries in [1], and because of that
>                              the VM which was spun up doesn't get an IP.
>                              The dnsmasq processes are running okay [2] and
>                              I can't see anything special/wrong in them.
>
>                              Any idea why this is happening? Or are you
>                              aware of any bugs around this area? Do you see
>                              a problem with having 2 subnets mapped to 1
>                              private-net?
>
>
>
>                              Thanks,
>                              Dani
>
>                              [1]
>
>         /var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
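>
>                              (In case it's useful, the duplicated lines in
>                              that file can be listed with something like
>                              "sort addn_hosts | uniq -d" run from inside
>                              that directory.)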
>
>                              [2]
>
>         nobody    5664     1  0 Jun02 ?        00:00:08 dnsmasq --no-hosts
>           --no-resolv --strict-order --bind-interfaces
>           --interface=tapc9164734-0c --except-interface=lo
>           --pid-file=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/pid
>           --dhcp-hostsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/host
>           --addn-hosts=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/addn_hosts
>           --dhcp-optsfile=/var/lib/neutron/dhcp/2fb7de9d-d6df-481f-acca-2f7860cffa60/opts
>           --leasefile-ro --dhcp-authoritative
>           --dhcp-range=set:tag0,192.168.110.0,static,86400s
>           --dhcp-range=set:tag1,192.168.111.0,static,86400s
>           --dhcp-lease-max=512 --conf-file= --server=10.0.0.31
>           --server=10.0.0.32 --domain=openstacklocal
>
>
>
>
>
>
>
>                  --
>                  Kevin Benton
>
>
>
>
>


