[Openstack-operators] Openstack-operators Digest, Vol 19, Issue 59

Christian Parpart trapni at gmail.com
Wed May 30 14:55:57 UTC 2012


On Wed, May 30, 2012 at 4:35 PM, George Mihaiescu
<George.Mihaiescu at q9.com> wrote:

> All the flags are very well documented (but there are many of them :)
>
>
> http://docs.openstack.org/trunk/openstack-compute/admin/content/compute-options-reference.html


Hey,

I knew this page, but while browsing over it I did not pay close attention to
absolutely every line, especially where the word "metadata" confused me a bit:
it did not hint that it actually meant the "nova-api" service rather than some
kind of I-don't-know-what metadata service. :-)

Still, I find the few words in that (and other) line(s) a bit too... few,
from a newbie user's point of view.

All I wanted to propose is to highlight a bit more what to take care of in a
multi-node setup. Everyone says the controller node is kind of central and at
least contains the nova-scheduler service, maybe nova-cert and nova-objectstore,
but so far I have found no document that really highlights which services are
meant to run where, and how to configure their IPs (just as "metadata" was not
clear enough for me to equate with "nova-api").
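
Something like the following sketch is what I have in mind (based purely on my
own setup from the other thread; the flag name is from nova/flags.py, and the
actual roles may of course differ per deployment):

  controller (10.10.30.190):    nova-api, nova-scheduler, nova-cert, nova-objectstore
  network node (10.10.30.4):    nova-network; set --metadata_host=10.10.30.190
                                so the metadata DNAT points at nova-api, not at itself
  compute-2/3 (10.10.30.191/2): nova-compute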

Best regards,
Christian Parpart.

>
>
> George
>
> -----Original Message-----
> From: openstack-operators-bounces at lists.openstack.org [mailto:
> openstack-operators-bounces at lists.openstack.org] On Behalf Of
> openstack-operators-request at lists.openstack.org
> Sent: Wednesday, May 30, 2012 8:00 AM
> To: openstack-operators at lists.openstack.org
> Subject: Openstack-operators Digest, Vol 19, Issue 59
>
> Send Openstack-operators mailing list submissions to
>        openstack-operators at lists.openstack.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> or, via email, send a message with subject or body 'help' to
>        openstack-operators-request at lists.openstack.org
>
> You can reach the person managing the list at
>        openstack-operators-owner at lists.openstack.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Openstack-operators digest..."
>
>
> Today's Topics:
>
>   1. Re: cloud-init:169.254.169.254 to time out / refuse
>      connections (Dan Wendlandt)
>   2. Re: cloud-init:169.254.169.254 to time out / refuse
>      connections (Christian Parpart)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 29 May 2012 17:06:20 -0700
> From: Dan Wendlandt <dan at nicira.com>
> To: Christian Parpart <trapni at gmail.com>
> Cc: openstack-operators at lists.openstack.org
> Subject: Re: [Openstack-operators] cloud-init:169.254.169.254 to time
>        out / refuse connections
> Message-ID:
>        <CA+0XJm-yRVMOZocN1nmvCHztEA4QhgF1WWd3s-JbXiXUmi=Tnw at mail.gmail.com
> >
> Content-Type: text/plain; charset="iso-8859-1"
>
> The flag metadata_host (in nova/flags.py) defaults to the IP address of the
> localhost, so nova-network will DNAT to its own IP unless you override
> metadata_host in your nova.conf.
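>
> For example (IP taken from your setup below; just a sketch of the relevant
> flagfile line, not a complete config):
>
>   # /etc/nova/nova.conf on the node running nova-network:
>   # point the metadata DNAT at the controller that runs nova-api
>   --metadata_host=10.10.30.190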
>
> Dan
>
> On Tue, May 29, 2012 at 4:28 PM, Christian Parpart <trapni at gmail.com>
> wrote:
>
> > On Tue, May 29, 2012 at 2:47 PM, Christian Parpart <trapni at gmail.com
> >wrote:
> >
> >> Hey all,
> >>
> >> This 169.254.169.254 is driving me crazy. I have already read a few things
> >> about that suspicious IP address; however, I always get either a few:
> >>
> >> 2012-05-29 12:22:40,831 - util.py[WARNING]: '
> >> http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
> >> [50/120s]: url error [timed out]
> >>
> >> or I'll get tons of:
> >>
> >> 2012-05-29 12:19:38,049 - util.py[WARNING]: '
> >> http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
> >> [113/120s]: url error [[Errno 111] Connection refused]
> >>
> >> when instantiating a new VM.
> >>
> >> My setup is as follows:
> >>
> >> "production" network: 10.10.40.0/21
> >>  management network (physical nodes, switches, PDUs, ...) 10.10.0.0/19
> >>
> >> nova-network: (we're not in multi_host mode)
> >> - eth0: 10.10.30.4
> >>
> >> controller (api, scheduler, etc, also compute-1 node):
> >> - eth0: 10.10.30.190
> >>
> >> compute-2:
> >> - eth0: 10.10.30.191
> >>
> >> compute-3:
> >> - eth0: 10.10.30.192
> >>
> >> Now, since 169.254.169.254 is just an artificial IP that gets NAT'ed to
> >> the right host via iptables, I did a quick check: tcp/80 seems to be
> >> redirected to the nova-api service at port 8775.
> >>
> >> So here's my question:
> >> On which physical nodes is this iptables rule expected: just the
> >> nova-network node, or on every compute node? (And how do I fix the
> >> situation above?)
> >>
> >> I'm asking because I found the DNAT rule on the dedicated network node and
> >> also on the compute-1 node (which is also the controller node, with api,
> >> scheduler, etc.), but neither on the compute-2 nor the compute-3 node;
> >> regardless of my issue, this doesn't feel right.
> >>
> >
> > Hey,
> >
> > For the latter case (ECONNREFUSED) I believe I have an answer, though not
> > for why it is set up this way:
> >
> > root at nova-network-node:/etc/nova# iptables -t nat -L -vn | grep -n3 169.254.169.254
> > 26-
> > 27-Chain nova-network-PREROUTING (1 references)
> > 28- pkts bytes target     prot opt in     out     source               destination
> > 29:   33  1980 DNAT       tcp  --  *      *       0.0.0.0/0            169.254.169.254      tcp dpt:80 to:10.10.40.1:8775
> > 30-    0     0 DNAT       udp  --  *      *       0.0.0.0/0            10.10.40.1           udp dpt:1000 to:10.10.40.2:1194
> > 31-
> >
> > This shows that the suspicious IP address is routed to 10.10.40.1:8775,
> > where this IP is the host itself and not the nova-api node's IP.
> >
> > AFAIK nova-api is meant to be installed on just a single node, that is, the
> > controller node, so I wonder why nova-network seems to create a DNAT rule
> > for nova-api pointing to its own host instead of to the cloud controller's IP.
> >
> > I checked my nova.conf, and while there is no direct entry for what IP to
> > use for nova-api, I at least see that cc_host is set to the proper IP
> > (10.10.30.190).
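> >
> > For reference, this is how I checked (a sketch; the config path may differ
> > in your deployment):
> >
> >   root at controller:/etc/nova# grep -E 'metadata_host|cc_host' nova.conf
> >   --cc_host=10.10.30.190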
> >
> > So long,
> > Christian Parpart.
> >
> >
> > _______________________________________________
> > Openstack-operators mailing list
> > Openstack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> >
>
>
> --
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Dan Wendlandt
> Nicira, Inc: www.nicira.com
> twitter: danwendlandt
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> ------------------------------
>
> Message: 2
> Date: Wed, 30 May 2012 09:14:38 +0200
> From: Christian Parpart <trapni at gmail.com>
> To: Dan Wendlandt <dan at nicira.com>
> Cc: openstack-operators at lists.openstack.org
> Subject: Re: [Openstack-operators] cloud-init:169.254.169.254 to time
>        out / refuse connections
> Message-ID:
>        <CA+qvzFM0Wv=iAFBSDSZ86BMkLW-M5f4wULRtDAWc4LP8sELT3A at mail.gmail.com
> >
> Content-Type: text/plain; charset="iso-8859-1"
>
> We should improve the docs regarding multi-host setups and this flag, to
> state that explicitly.
>
> I found the solution by accident, out of curiosity. :-)
>
> Regards,
> Christian Parpart.
> On 30.05.2012 at 02:06, "Dan Wendlandt" <dan at nicira.com> wrote:
>
> > The flag metadata_host (in nova/flags.py) defaults to the IP address of
> > the localhost, so nova-network will DNAT to its own IP unless you override
> > metadata_host in your nova.conf.
> >
> > Dan
> >
> > [snip: the full earlier thread, quoted verbatim above as Message 1]
>
> ------------------------------
>
> _______________________________________________
> Openstack-operators mailing list
> Openstack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> End of Openstack-operators Digest, Vol 19, Issue 59
> ***************************************************
>