[Openstack-operators] the url for moderator request

Edgar Magana edgar.magana at workday.com
Fri Oct 14 18:05:55 UTC 2016


Patricia,

I think we are announcing which session we would like to moderate and adding our names in the respective etherpad.

Thanks,

Edgar

From: Patricia Dugan <patricia.dugan at oneops.com>
Date: Thursday, October 13, 2016 at 7:33 AM
To: "openstack-operators at lists.openstack.org" <openstack-operators at lists.openstack.org>
Subject: [Openstack-operators] the url for moderator request

Is this the url for the moderator request: https://etherpad.openstack.org/p/BCN-ops-meetup << at the bottom, is that where we are supposed to fill out?


On Oct 13, 2016, at 12:19 AM, openstack-operators-request at lists.openstack.org wrote:

Send OpenStack-operators mailing list submissions to
openstack-operators at lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

or, via email, send a message with subject or body 'help' to
openstack-operators-request at lists.openstack.org

You can reach the person managing the list at
openstack-operators-owner at lists.openstack.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of OpenStack-operators digest..."


Today's Topics:

  1. Re: [openstack-operators][ceph][nova] How do you handle Nova
     on Ceph? (Adam Kijak)
  2. Ubuntu package for Octavia (Lutz Birkhahn)
  3. Re: [openstack-operators][ceph][nova] How do you handle Nova
     on Ceph? (Adam Kijak)
  4. [nova] Does anyone use the os-diagnostics API? (Matt Riedemann)
  5. OPNFV delivered its new Colorado release (Ulrich Kleber)
  6. Re: [nova] Does anyone use the os-diagnostics API? (Tim Bell)
  7. Re: [nova] Does anyone use the os-diagnostics API? (Joe Topjian)
  8. Re: OPNFV delivered its new Colorado release (Jay Pipes)
  9. glance, nova backed by NFS (Curtis)
 10. Re: glance, nova backed by NFS (Kris G. Lindgren)
 11. Re: glance, nova backed by NFS (Tobias Sch?n)
 12. Re: glance, nova backed by NFS (Kris G. Lindgren)
 13. Re: glance, nova backed by NFS (Curtis)
 14. Re: glance, nova backed by NFS (James Penick)
 15. Re: glance, nova backed by NFS (Curtis)
 16. Re: Ubuntu package for Octavia (Xav Paice)
 17. Re: [openstack-operators][ceph][nova] How do you handle Nova
     on Ceph? (Warren Wang)
 18. Re: [openstack-operators][ceph][nova] How do you handle Nova
     on Ceph? (Clint Byrum)
 19. Disable console for an instance (Blair Bethwaite)
 20. host maintenance (Juvonen, Tomi (Nokia - FI/Espoo))
 21. Ops at Barcelona - Call for Moderators (Tom Fifield)


----------------------------------------------------------------------

Message: 1
Date: Wed, 12 Oct 2016 12:23:41 +0000
From: Adam Kijak <adam.kijak at corp.ovh.com>
To: Xav Paice <xavpaice at gmail.com>,
"openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]
How do you handle Nova on Ceph?
Message-ID: <839b3aba73394bf9aae56c801687e50c at corp.ovh.com>
Content-Type: text/plain; charset="iso-8859-1"


________________________________________
From: Xav Paice <xavpaice at gmail.com>
Sent: Monday, October 10, 2016 8:41 PM
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?

On Mon, 2016-10-10 at 13:29 +0000, Adam Kijak wrote:

Hello,

We use a Ceph cluster for Nova (Glance and Cinder as well) and over time,
more and more data is stored there. We can't keep the cluster so big
because of Ceph's limitations. Sooner or later it needs to be closed for
adding new instances, images and volumes. Not to mention it's a big
failure domain.

I'm really keen to hear more about those limitations.

Basically it's all related to the failure domain ("blast radius") and risk management.
A bigger Ceph cluster means more users.
Growing the Ceph cluster temporarily slows it down, so many users will be affected.
There are bugs in Ceph which can cause data corruption. It's rare, but when it happens
it can affect many (maybe all) users of the Ceph cluster.



How do you handle this issue?
What is your strategy to divide Ceph clusters between compute nodes?
How do you solve VM snapshot placement and migration issues then
(snapshots will be left on older Ceph)?

Having played with Ceph and compute on the same hosts, I'm a big fan of
separating them and having dedicated Ceph hosts, and dedicated compute
hosts.  That allows me a lot more flexibility with hardware
configuration and maintenance, easier troubleshooting for resource
contention, and also allows scaling at different rates.

Exactly, I consider it the best practice as well.




------------------------------

Message: 2
Date: Wed, 12 Oct 2016 12:25:38 +0000
From: Lutz Birkhahn <lutz.birkhahn at noris.de>
To: "openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: [Openstack-operators] Ubuntu package for Octavia
Message-ID: <B1F091B0-61C6-4AB2-AEEC-F2F458101C11 at noris.de>
Content-Type: text/plain; charset="utf-8"

Has anyone seen Ubuntu packages for Octavia yet?

We're running Ubuntu 16.04 with Newton, but for whatever reason I cannot find any Octavia package...

So far I've only found the following in https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun:

    Ubuntu Packages Setup: Install octavia with your favorite distribution: "pip install octavia"

That was not exactly what we would like to do in our production cloud...

Thanks,

/lutz

------------------------------

Message: 3
Date: Wed, 12 Oct 2016 12:35:48 +0000
From: Adam Kijak <adam.kijak at corp.ovh.com>
To: Abel Lopez <alopgeek at gmail.com>
Cc: openstack-operators <openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]
How do you handle Nova on Ceph?
Message-ID: <29c531e1c0614dc1bd1cf587d69aa45b at corp.ovh.com>
Content-Type: text/plain; charset="iso-8859-1"


_______________________________________
From: Abel Lopez <alopgeek at gmail.com>
Sent: Monday, October 10, 2016 9:57 PM
To: Adam Kijak
Cc: openstack-operators
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?

Have you thought about dedicated pools for cinder/nova and a separate pool for glance, and any other uses you might have?
You need to set up secrets on kvm, but you can have cinder creating volumes from glance images quickly in different pools.

We already have separate pools for images, volumes and instances.
Separate pools don't really split the failure domain though.
Also, AFAIK you can't set up multiple pools for instances in nova.conf, right?
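
For reference, the libvirt driver in nova.conf points at exactly one RBD pool
for ephemeral disks, so a sketch of the relevant stanza (the pool name is just
an example) shows why:

    [libvirt]
    images_type = rbd
    # a single pool only; there is no way to list several here
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf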



------------------------------

Message: 4
Date: Wed, 12 Oct 2016 09:00:11 -0500
From: Matt Riedemann <mriedem at linux.vnet.ibm.com>
To: "openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: [Openstack-operators] [nova] Does anyone use the
os-diagnostics API?
Message-ID: <5dae5c89-b682-15c7-11c6-d9a5481076a4 at linux.vnet.ibm.com>
Content-Type: text/plain; charset=utf-8; format=flowed

The current form of the nova os-diagnostics API is hypervisor-specific,
which makes it pretty unusable in any generic way, which is why Tempest
doesn't test it.

Way back when the v3 API was a thing for 2 minutes there was work done
to standardize the diagnostics information across virt drivers in nova.
The only thing is we haven't exposed that out of the REST API yet, but
there is a spec proposing to do that now:

https://review.openstack.org/#/c/357884/

This is an admin-only API so we're trying to keep an end user point of
view out of discussing it. For example, the disk details don't have any
unique identifier. We could add one, but would it be useful to an admin?

This API is really supposed to be for debug, but the question I have for
this list is does anyone actually use the existing os-diagnostics API?
And if so, how do you use it, and what information is most useful? If
you are using it, please review the spec and provide any input on what's
proposed for outputs.
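
(For anyone who hasn't poked at it: the hypervisor-specific shape is easy to
see from the CLI. The keys below are typical of the libvirt driver and differ
entirely on other drivers, which is exactly the problem.)

    $ nova diagnostics <server-uuid>
    # libvirt returns flat, driver-specific keys, e.g.
    #   cpu0_time, memory, vda_read_req, vda_write_bytes, vnet0_rx, ...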

--

Thanks,

Matt Riedemann




------------------------------

Message: 5
Date: Wed, 12 Oct 2016 14:17:50 +0000
From: Ulrich Kleber <Ulrich.Kleber at huawei.com>
To: "openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: [Openstack-operators] OPNFV delivered its new Colorado
release
Message-ID: <884BFBB6F562F44F91BF83F77C24E9972E7EB5F0 at lhreml507-mbx>
Content-Type: text/plain; charset="iso-8859-1"

Hi,
I didn't see an official announcement, so I'd like to point you to the new release of OPNFV.
https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0
OPNFV is an open source project and one of the most important users of OpenStack in the Telecom/NFV area. It may be interesting for your work.
Feel free to contact me or meet during the Barcelona summit at the session of the OpenStack Operators Telecom/NFV Functional Team (https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16768/openstack-operators-telecomnfv-functional-team).
Cheers,
Uli


Ulrich KLEBER
Chief Architect Cloud Platform
European Research Center
IT R&D Division
Riesstraße 25
80992 München
Mobile: +49 (0)173 4636144
Mobile (China): +86 13005480404




------------------------------

Message: 6
Date: Wed, 12 Oct 2016 14:35:54 +0000
From: Tim Bell <Tim.Bell at cern.ch>
To: Matt Riedemann <mriedem at linux.vnet.ibm.com>
Cc: "openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] [nova] Does anyone use the
os-diagnostics API?
Message-ID: <248C8965-1ECE-4CA3-9B88-A7C75CF8B3AD at cern.ch>
Content-Type: text/plain; charset="utf-8"



On 12 Oct 2016, at 07:00, Matt Riedemann <mriedem at linux.vnet.ibm.com> wrote:

The current form of the nova os-diagnostics API is hypervisor-specific, which makes it pretty unusable in any generic way, which is why Tempest doesn't test it.

Way back when the v3 API was a thing for 2 minutes there was work done to standardize the diagnostics information across virt drivers in nova. The only thing is we haven't exposed that out of the REST API yet, but there is a spec proposing to do that now:

https://review.openstack.org/#/c/357884/

This is an admin-only API so we're trying to keep an end user point of view out of discussing it. For example, the disk details don't have any unique identifier. We could add one, but would it be useful to an admin?

This API is really supposed to be for debug, but the question I have for this list is does anyone actually use the existing os-diagnostics API? And if so, how do you use it, and what information is most useful? If you are using it, please review the spec and provide any input on what's proposed for outputs.

Matt,

Thanks for asking. We've used the API in the past as a way of getting the usage data out of Nova. We had problems running ceilometer at scale and this was a way of retrieving the data for our accounting reports. We created a special policy configuration to allow authorised users to query this data without full admin rights.
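
For the curious, the policy override is a one-liner; a sketch assuming the
v2.1 policy key and an illustrative "accounting" role (check the exact key
name for your Nova release):

    "os_compute_api:os-server-diagnostics": "rule:admin_api or role:accounting"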

From the look of the new spec, it would be fairly straightforward to adapt the process to use the new format as all the CPU utilisation data is there.

Tim


--

Thanks,

Matt Riedemann


_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


------------------------------

Message: 7
Date: Wed, 12 Oct 2016 08:44:14 -0600
From: Joe Topjian <joe at topjian.net>
To: "openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] [nova] Does anyone use the
os-diagnostics
API?
Message-ID:
<CA+y7hvg9V8sa-esRGK-mW=OGhiDv5+P7ak=u4zGzppmwZdu6Jg at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi Matt, Tim,

Thanks for asking. We've used the API in the past as a way of getting the
usage data out of Nova. We had problems running ceilometer at scale and
this was a way of retrieving the data for our accounting reports. We
created a special policy configuration to allow authorised users to query
this data without full admin rights.

We do this as well.



From the look of the new spec, it would be fairly straightforward to adapt
the process to use the new format as all the CPU utilisation data is there.

I agree.

------------------------------

Message: 8
Date: Wed, 12 Oct 2016 13:02:11 -0400
From: Jay Pipes <jaypipes at gmail.com>
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] OPNFV delivered its new Colorado
release
Message-ID: <fe2e53bb-6ed2-c1af-bbee-97c3220a30c2 at gmail.com>
Content-Type: text/plain; charset=windows-1252; format=flowed

On 10/12/2016 10:17 AM, Ulrich Kleber wrote:

Hi,

I didn't see an official announcement, so I'd like to point you to the new
release of OPNFV.

https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0

OPNFV is an open source project and one of the most important users of
OpenStack in the Telecom/NFV area. It may be interesting for your work.

Hi Ulrich,

I'm hoping you can explain to me what exactly OPNFV is producing in its
releases. I've been through a number of the Jira items linked in the
press release above and simply cannot tell what is actually being
delivered by OPNFV versus what is just something that is in an OpenStack
component or deployment.

A good example of this is the IPV6 project's Jira item here:

https://jira.opnfv.org/browse/IPVSIX-37

Which has the title of "Auto-installation of both underlay IPv6 and
overlay IPv6". The issue is marked as "Fixed" in Colorado 1.0. However,
I can't tell what code was produced in OPNFV that delivers the
auto-installation of both an underlay IPv6 and an overlay IPv6.

In short, I'm confused about what OPNFV is producing and hope to get
some insights from you.

Best,
-jay



------------------------------

Message: 9
Date: Wed, 12 Oct 2016 11:21:27 -0600
From: Curtis <serverascode at gmail.com>
To: "openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: [Openstack-operators] glance, nova backed by NFS
Message-ID:
<CAJ_JamBDSb1q9zz2oBwHRBaee5-V0tZJoRd=8yYRPnBFxy4H+g at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Hi All,

I've never used NFS with OpenStack before. But I am now with a small
lab deployment with a few compute nodes.

Is there anything special I should do with NFS and glance and nova? I
remember there was an issue way back when of images being deleted b/c
certain components weren't aware they are on NFS. I'm guessing that
has changed but just wanted to check if there is anything specific I
should be doing configuration-wise.

I can't seem to find many examples of NFS usage...so feel free to
point me to any documentation, blog posts, etc. I may have just missed
it.

Thanks,
Curtis.



------------------------------

Message: 10
Date: Wed, 12 Oct 2016 17:58:47 +0000
From: "Kris G. Lindgren" <klindgren at godaddy.com>
To: Curtis <serverascode at gmail.com>,
"openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] glance, nova backed by NFS
Message-ID: <94226DBF-0D6F-4585-9341-37E193C5F0E6 at godaddy.com>
Content-Type: text/plain; charset="utf-8"

We don't use shared storage at all.  But I do remember what you are talking about.  The issue is that compute nodes weren't aware they were on shared storage, and would nuke the backing image from shared storage after all VMs on *that* compute node had stopped using it, not after all VMs had stopped using it.

https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to address that concern has landed, but only in trunk, maybe Mitaka. The stable releases don't appear to be shared-backing-image safe.

You might be able to get around this by setting the compute image manager task to not run.  But the issue with that is one missed compute node, and everyone will have a bad day.
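
For reference, the knobs involved live in nova.conf on the computes -- a
sketch, worth checking against your release's docs before relying on it:

    [DEFAULT]
    # -1 disables the periodic image cache manager task entirely
    image_cache_manager_interval = -1
    # or keep the task but never delete cached base images
    remove_unused_base_images = False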

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 10/12/16, 11:21 AM, "Curtis" <serverascode at gmail.com> wrote:

   Hi All,

   I've never used NFS with OpenStack before. But I am now with a small
   lab deployment with a few compute nodes.

   Is there anything special I should do with NFS and glance and nova? I
   remember there was an issue way back when of images being deleted b/c
   certain components weren't aware they are on NFS. I'm guessing that
   has changed but just wanted to check if there is anything specific I
   should be doing configuration-wise.

   I can't seem to find many examples of NFS usage...so feel free to
   point me to any documentation, blog posts, etc. I may have just missed
   it.

   Thanks,
   Curtis.

   _______________________________________________
   OpenStack-operators mailing list
   OpenStack-operators at lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



------------------------------

Message: 11
Date: Wed, 12 Oct 2016 17:59:13 +0000
From: Tobias Schön <Tobias.Schon at fiberdata.se>
To: Curtis <serverascode at gmail.com>,
"openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] glance, nova backed by NFS
Message-ID: <58f74b23f1254f5886df90183092a32b at elara.ad.fiberdata.se>
Content-Type: text/plain; charset="iso-8859-1"

Hi,

We have an environment with glance and cinder using NFS.
It's important that they have the correct rights: the shares should be owned by nova on the compute nodes if mounted at /var/lib/nova/instances, and the same goes for nova and glance on the controller.

It's important that you mount the glance and nova shares via fstab.

The cinder share is handled by the NFS driver.

We are running RHEL OSP 6, OpenStack Juno.

This parameter is used:
nfs_shares_config=/etc/cinder/shares-nfs.conf in the /etc/cinder/cinder.conf file, and then we have specified the share in /etc/cinder/shares-nfs.conf.

chmod 0640 /etc/cinder/shares-nfs.conf

setsebool -P virt_use_nfs on
This one is important to make it work with SELinux.

How up to date this actually is I don't know tbh, but it was up to date as of the Red Hat documentation when we deployed it around 1.5 years ago.
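
Pulled together, the moving parts look roughly like this (a sketch only;
the filer hostname and export paths are made up):

    # /etc/cinder/cinder.conf
    [DEFAULT]
    volume_driver = cinder.volume.drivers.nfs.NfsDriver
    nfs_shares_config = /etc/cinder/shares-nfs.conf

    # /etc/cinder/shares-nfs.conf -- one export per line
    filer.example.com:/export/cinder

    # /etc/fstab entries for the glance and nova shares
    filer.example.com:/export/glance  /var/lib/glance/images   nfs  defaults  0 0
    filer.example.com:/export/nova    /var/lib/nova/instances  nfs  defaults  0 0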

//Tobias

-----Original message-----
From: Curtis [mailto:serverascode at gmail.com]
Sent: 12 October 2016 19:21
To: openstack-operators at lists.openstack.org
Subject: [Openstack-operators] glance, nova backed by NFS

Hi All,

I've never used NFS with OpenStack before. But I am now with a small lab deployment with a few compute nodes.

Is there anything special I should do with NFS and glance and nova? I remember there was an issue way back when of images being deleted b/c certain components weren't aware they are on NFS. I'm guessing that has changed but just wanted to check if there is anything specific I should be doing configuration-wise.

I can't seem to find many examples of NFS usage...so feel free to point me to any documentation, blog posts, etc. I may have just missed it.

Thanks,
Curtis.

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



------------------------------

Message: 12
Date: Wed, 12 Oct 2016 18:06:31 +0000
From: "Kris G. Lindgren" <klindgren at godaddy.com>
To: Tobias Schön <Tobias.Schon at fiberdata.se>, Curtis
<serverascode at gmail.com>, "openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] glance, nova backed by NFS
Message-ID: <88AE471E-81CD-4E72-935D-4390C05F5D33 at godaddy.com>
Content-Type: text/plain; charset="utf-8"

Tobias does bring up something that we have run into before.

With NFSv3, user mapping is done by UID, so you need to ensure that all of your servers use the same UID for nova/glance.  If you are using packages/automation that do useradds without pinning the UID, it's *VERY* easy to end up with mismatched username/UID pairs across multiple boxes.

NFSv4, IIRC, sends the username and the NFS server does the translation of the name to UID, so it should not have this issue.  But we have been bitten by that more than once on NFSv3.
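
A simple guard is to create the service users with explicit, identical IDs
on every box before the packages do it for you (the UID/GID values below are
only examples):

    # run on every controller/compute before installing packages
    groupadd -g 162 nova   && useradd -u 162 -g nova   -d /var/lib/nova   -s /sbin/nologin nova
    groupadd -g 161 glance && useradd -u 161 -g glance -d /var/lib/glance -s /sbin/nologin glance
    # verify afterwards on each box
    id nova; id glance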


___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 10/12/16, 11:59 AM, "Tobias Schön" <Tobias.Schon at fiberdata.se> wrote:

   Hi,

   We have an environment with glance and cinder using NFS.
   It's important that they have the correct rights: the shares should be owned by nova on the compute nodes if mounted at /var/lib/nova/instances, and the same goes for nova and glance on the controller.

   It's important that you mount the glance and nova shares via fstab.

   The cinder share is handled by the NFS driver.

   We are running RHEL OSP 6, OpenStack Juno.

   This parameter is used:
   nfs_shares_config=/etc/cinder/shares-nfs.conf in the /etc/cinder/cinder.conf file, and then we have specified the share in /etc/cinder/shares-nfs.conf.

   chmod 0640 /etc/cinder/shares-nfs.conf

   setsebool -P virt_use_nfs on
   This one is important to make it work with SELinux.

   How up to date this actually is I don't know tbh, but it was up to date as of the Red Hat documentation when we deployed it around 1.5 years ago.

   //Tobias

   -----Original message-----
   From: Curtis [mailto:serverascode at gmail.com]
   Sent: 12 October 2016 19:21
   To: openstack-operators at lists.openstack.org
   Subject: [Openstack-operators] glance, nova backed by NFS

   Hi All,

   I've never used NFS with OpenStack before. But I am now with a small lab deployment with a few compute nodes.

   Is there anything special I should do with NFS and glance and nova? I remember there was an issue way back when of images being deleted b/c certain components weren't aware they are on NFS. I'm guessing that has changed but just wanted to check if there is anything specific I should be doing configuration-wise.

   I can't seem to find many examples of NFS usage...so feel free to point me to any documentation, blog posts, etc. I may have just missed it.

   Thanks,
   Curtis.

   _______________________________________________
   OpenStack-operators mailing list
   OpenStack-operators at lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

   _______________________________________________
   OpenStack-operators mailing list
   OpenStack-operators at lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



------------------------------

Message: 13
Date: Wed, 12 Oct 2016 12:18:40 -0600
From: Curtis <serverascode at gmail.com>
To: "Kris G. Lindgren" <klindgren at godaddy.com>
Cc: "openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] glance, nova backed by NFS
Message-ID:
<CAJ_JamDhHBm7APEWDO1HMEfm7YEb3rT7x_cOjxycRp3JHvOxHQ at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren
<klindgren at godaddy.com> wrote:

We don't use shared storage at all.  But I do remember what you are talking about.  The issue is that compute nodes weren't aware they were on shared storage, and would nuke the backing image from shared storage after all VMs on *that* compute node had stopped using it, not after all VMs had stopped using it.

https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to address that concern has landed, but only in trunk, maybe Mitaka. The stable releases don't appear to be shared-backing-image safe.

You might be able to get around this by setting the compute image manager task to not run.  But the issue with that is one missed compute node, and everyone will have a bad day.

Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka,
and I will look into that bugfix. I guess I need to test this lol.

Thanks,
Curtis.



___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 10/12/16, 11:21 AM, "Curtis" <serverascode at gmail.com> wrote:

   Hi All,

   I've never used NFS with OpenStack before. But I am now with a small
   lab deployment with a few compute nodes.

   Is there anything special I should do with NFS and glance and nova? I
   remember there was an issue way back when of images being deleted b/c
   certain components weren't aware they are on NFS. I'm guessing that
   has changed but just wanted to check if there is anything specific I
   should be doing configuration-wise.

   I can't seem to find many examples of NFS usage...so feel free to
   point me to any documentation, blog posts, etc. I may have just missed
   it.

   Thanks,
   Curtis.

   _______________________________________________
   OpenStack-operators mailing list
   OpenStack-operators at lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




--
Blog: serverascode.com



------------------------------

Message: 14
Date: Wed, 12 Oct 2016 11:34:39 -0700
From: James Penick <jpenick at gmail.com>
To: Curtis <serverascode at gmail.com>
Cc: "openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] glance, nova backed by NFS
Message-ID:
<CAMomh-6y5H_2ETGUY_2_Uoz+Sq8POULb9vsKBWwcKovB8QdvGQ at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Are you backing both glance and nova-compute with NFS? If you're only
putting the glance store on NFS you don't need any special changes. It'll
Just Work.

On Wed, Oct 12, 2016 at 11:18 AM, Curtis <serverascode at gmail.com> wrote:


On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren
<klindgren at godaddy.com> wrote:

We don't use shared storage at all.  But I do remember what you are
talking about.  The issue is that compute nodes weren't aware they were on
shared storage, and would nuke the backing image from shared storage after
all VMs on *that* compute node had stopped using it, not after all VMs
had stopped using it.

https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to
address that concern has landed, but only in trunk, maybe Mitaka. The
stable releases don't appear to be shared-backing-image safe.

You might be able to get around this by setting the compute image
manager task to not run.  But the issue with that is one missed
compute node, and everyone will have a bad day.

Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka,
and I will look into that bugfix. I guess I need to test this lol.

Thanks,
Curtis.



___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 10/12/16, 11:21 AM, "Curtis" <serverascode at gmail.com> wrote:

   Hi All,

   I've never used NFS with OpenStack before. But I am now with a small
   lab deployment with a few compute nodes.

   Is there anything special I should do with NFS and glance and nova? I
   remember there was an issue way back when of images being deleted b/c
   certain components weren't aware they are on NFS. I'm guessing that
   has changed but just wanted to check if there is anything specific I
   should be doing configuration-wise.

   I can't seem to find many examples of NFS usage...so feel free to
   point me to any documentation, blog posts, etc. I may have just missed
   it.

   Thanks,
   Curtis.

   _______________________________________________
   OpenStack-operators mailing list
   OpenStack-operators at lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





--
Blog: serverascode.com

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

------------------------------

Message: 15
Date: Wed, 12 Oct 2016 12:49:40 -0600
From: Curtis <serverascode at gmail.com>
To: James Penick <jpenick at gmail.com>
Cc: "openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] glance, nova backed by NFS
Message-ID:
<CAJ_JamBYkB54yKS=V8a_b+FWKSuUtMEJFqm==mzaRj738RxWBQ at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

On Wed, Oct 12, 2016 at 12:34 PM, James Penick <jpenick at gmail.com> wrote:

Are you backing both glance and nova-compute with NFS? If you're only
putting the glance store on NFS you don't need any special changes. It'll
Just Work.

I've got both glance and nova backed by NFS. Haven't put up cinder
yet, but that will also be NFS backed. I just have very limited
storage on the compute hosts, basically just enough for the operating
system; this is just a small but permanent lab deployment. Good to
hear that Glance will Just Work. :) Thanks!

Thanks,
Curtis.



On Wed, Oct 12, 2016 at 11:18 AM, Curtis <serverascode at gmail.com> wrote:


On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren
<klindgren at godaddy.com> wrote:

We don't use shared storage at all.  But I do remember what you are
talking about.  The issue is that compute nodes weren't aware they were on
shared storage, and would nuke the backing image from shared storage after
all VMs on *that* compute node had stopped using it, not after all VMs had
stopped using it.

https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to
address that concern has landed, but only in trunk, maybe Mitaka. The stable
releases don't appear to be shared-backing-image safe.

You might be able to get around this by setting the compute image
manager task to not run.  But the issue with that is one missed compute
node, and everyone will have a bad day.

Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka,
and I will look into that bugfix. I guess I need to test this lol.

Thanks,
Curtis.



___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 10/12/16, 11:21 AM, "Curtis" <serverascode at gmail.com> wrote:

   Hi All,

   I've never used NFS with OpenStack before. But I am now with a small
   lab deployment with a few compute nodes.

   Is there anything special I should do with NFS and glance and nova? I
   remember there was an issue way back when of images being deleted b/c
   certain components weren't aware they are on NFS. I'm guessing that
   has changed but just wanted to check if there is anything specific I
   should be doing configuration-wise.

   I can't seem to find many examples of NFS usage...so feel free to
   point me to any documentation, blog posts, etc. I may have just missed
   it.

   Thanks,
   Curtis.

   _______________________________________________
   OpenStack-operators mailing list
   OpenStack-operators at lists.openstack.org

   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




--
Blog: serverascode.com

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




--
Blog: serverascode.com



------------------------------

Message: 16
Date: Thu, 13 Oct 2016 08:24:16 +1300
From: Xav Paice <xavpaice at gmail.com>
To: Lutz Birkhahn <lutz.birkhahn at noris.de>
Cc: "openstack-operators at lists.openstack.org"
<openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] Ubuntu package for Octavia
Message-ID:
<CAMb5Lvru-USCW=GztsWxigccVeUGszYcYn-txk=R9ZqUicva8w at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

I highly recommend looking into Giftwrap for that, until there are UCA
packages.

The things missing from the packages that Giftwrap produces are init
scripts, config file examples, and the various user and directory setup
stuff.  That's easy enough to put into config management or a separate
package if you want to.

On 13 October 2016 at 01:25, Lutz Birkhahn <lutz.birkhahn at noris.de> wrote:


Has anyone seen Ubuntu packages for Octavia yet?

We're running Ubuntu 16.04 with Newton, but for whatever reason I cannot
find any Octavia package...

So far I've only found the following in
https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun:

    Ubuntu Packages Setup: Install octavia with your favorite
distribution: "pip install octavia"

That was not exactly what we would like to do in our production cloud...

Thanks,

/lutz
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


------------------------------

Message: 17
Date: Wed, 12 Oct 2016 16:02:55 -0400
From: Warren Wang <warren at wangspeed.com>
To: Adam Kijak <adam.kijak at corp.ovh.com>
Cc: openstack-operators <openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]
How do you handle Nova on Ceph?
Message-ID:
<CAARB8+vKoT3WAtde_vst2x4cDAOiv+S4G4TDOVAq1NniR=4kLQ at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

If fault domain is a concern, you can always split the cloud up into 3
regions, each having a dedicated Ceph cluster. It isn't necessarily going to
mean more hardware, just logical splits. This is kind of assuming that the
network doesn't share the same fault domain though.

Alternatively, you can split the hardware for the Ceph boxes into multiple
clusters, and use multi-backend Cinder to talk to the same set of
hypervisors to use multiple Ceph clusters. We're doing that to migrate from
one Ceph cluster to another. You can even mount a volume from each cluster
into a single instance.
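
For example, a sketch of the cinder.conf for two RBD backends (the section
and backend names are made up; adjust pools and keyrings to taste):

    [DEFAULT]
    enabled_backends = ceph-old,ceph-new

    [ceph-old]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-old
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph-old.conf
    rbd_user = cinder

    [ceph-new]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-new
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph-new.conf
    rbd_user = cinder

A volume type per backend (cinder type-create ceph-new; cinder type-key
ceph-new set volume_backend_name=ceph-new) then steers new volumes.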

Keep in mind that you don't really want to shrink a Ceph cluster too much.
What's "too big"? You should keep growing so that the fault domains aren't
too small (3 physical racks min); otherwise you risk the entire cluster
stopping if you lose the network.

Just my 2 cents,
Warren

On Wed, Oct 12, 2016 at 8:35 AM, Adam Kijak <adam.kijak at corp.ovh.com> wrote:


_______________________________________
From: Abel Lopez <alopgeek at gmail.com>
Sent: Monday, October 10, 2016 9:57 PM
To: Adam Kijak
Cc: openstack-operators
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]
How do you handle Nova on Ceph?


Have you thought about dedicated pools for cinder/nova and a separate
pool for glance, and any other uses you might have?

You need to set up secrets on kvm, but you can have cinder creating
volumes from glance images quickly in different pools.

We already have separate pools for images, volumes and instances.
Separate pools don't really split the failure domain though.
Also, AFAIK you can't set up multiple pools for instances in nova.conf,
right?

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

------------------------------

Message: 18
Date: Wed, 12 Oct 2016 13:46:01 -0700
From: Clint Byrum <clint at fewbar.com>
To: openstack-operators <openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]
How do
you handle Nova on Ceph?
Message-ID: <1476304977-sup-4753 at fewbar.com>
Content-Type: text/plain; charset=UTF-8

Excerpts from Adam Kijak's message of 2016-10-12 12:23:41 +0000:

________________________________________
From: Xav Paice <xavpaice at gmail.com>
Sent: Monday, October 10, 2016 8:41 PM
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?

On Mon, 2016-10-10 at 13:29 +0000, Adam Kijak wrote:

Hello,

We use a Ceph cluster for Nova (Glance and Cinder as well) and over time,
more and more data is stored there. We can't keep the cluster so big
because of Ceph's limitations. Sooner or later it needs to be closed for
adding new instances, images and volumes. Not to mention it's a big
failure domain.

I'm really keen to hear more about those limitations.

Basically it's all related to the failure domain ("blast radius") and risk management.
A bigger Ceph cluster means more users.

Are these risks well documented? Since Ceph is specifically designed
_not_ to have the kind of large blast radius that one might see with
say, a centralized SAN, I'm curious to hear what events trigger
cluster-wide blasts.


Growing the Ceph cluster temporarily slows it down, so many users will be affected.

One might say that a Ceph cluster that can't be grown without the users
noticing is an over-subscribed Ceph cluster. My understanding is that
one is always advised to provision a certain amount of cluster capacity
for growing and replicating to replaced drives.


There are bugs in Ceph which can cause data corruption. It's rare, but when it happens
it can affect many (maybe all) users of the Ceph cluster.

:(



------------------------------

Message: 19
Date: Thu, 13 Oct 2016 13:37:58 +1100
From: Blair Bethwaite <blair.bethwaite at gmail.com>
To: "openstack-oper." <openstack-operators at lists.openstack.org>
Subject: [Openstack-operators] Disable console for an instance
Message-ID:
<CA+z5DsyKjC6z4E+xOJv_a-UKbv+bX-+bt2mXDyp3c2e-bJbovA at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi all,

Does anyone know whether there is a way to disable the novnc console on a
per instance basis?

Cheers,
Blair

------------------------------

Message: 20
Date: Thu, 13 Oct 2016 06:12:59 +0000
From: "Juvonen, Tomi (Nokia - FI/Espoo)" <tomi.juvonen at nokia.com>
To: "OpenStack-operators at lists.openstack.org"
<OpenStack-operators at lists.openstack.org>
Subject: [Openstack-operators] host maintenance
Message-ID:
<AM4PR07MB15694E5C03F9D1E9C255957B85DC0 at AM4PR07MB1569.eurprd07.prod.outlook.com>

Content-Type: text/plain; charset="us-ascii"

Hi,

We had a session at the Austin summit about host maintenance:
https://etherpad.openstack.org/p/AUS-ops-Nova-maint

Now the discussion has gotten to the point where we should start prototyping a service hosting the maintenance. Nova could link to this new service, but no maintenance functionality should be placed in the Nova project itself. I was working to put this in Nova, but it now looks better to build the prototype first:
https://review.openstack.org/310510/


From the discussion on the review above, the new service might have a maintenance API endpoint that links to a host by utilizing the "hostid" used in Nova, and then there should be a "tenant_id"-specific endpoint to get what each project needs. Something like:
http://maintenancethingy/maintenance/{hostid}
http://maintenancethingy/maintenance/{hostid}/{tenant_id}
This will ensure the tenant does not learn details about the host, but can still get the needed information about maintenance affecting its instances.
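
Purely to illustrate the idea (nothing here exists yet -- the host name,
token handling and response fields are all made up):

    $ curl -H "X-Auth-Token: $TOKEN" \
          http://maintenancethingy/maintenance/<hostid>/<tenant_id>
    {"state": "planned", "starts_at": "2016-11-02T03:00:00Z",
     "affected_instances": ["1b9c5e8a-..."]}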

On the Telco/NFV side we have the OPNFV Doctor project that sets the requirements for this from that direction. I am personally interested in that part, but to have this serve all operator requirements, it is best to bring it here.

This could be further discussed in Barcelona, and we should get other people interested in helping to start this. Any suggestion for the Ops session?

Looking forward,
Tomi



------------------------------

Message: 21
Date: Thu, 13 Oct 2016 15:19:51 +0800
From: Tom Fifield <tom at openstack.org>
To: OpenStack Operators <openstack-operators at lists.openstack.org>
Subject: [Openstack-operators] Ops at Barcelona - Call for Moderators
Message-ID: <ca8e7f82-b11e-ab3f-0d7c-9cc26719ebf0 at openstack.org>
Content-Type: text/plain; charset=utf-8; format=flowed

Hello all,

The Ops design summit sessions are now listed on the schedule!


https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Ops+Summit%3A

Please tick them and set up your summit app :)


We are still looking for moderators for the following sessions:

* OpenStack on Containers
* Containers on OpenStack
* Migration to OpenStack
* Fleet Management
* Feedback to PWG
* Neutron pain points
* Config Mgmt
* HAProxy, MySQL, Rabbit Tuning
* Swift
* Horizon
* OpenStack CLI
* Baremetal Deploy
* OsOps
* CI/CD workflows
* Alt Deployment tech
* ControlPlane Design(multi region)
* Docs


==> If you are interested in moderating a session, please

* write your name in its etherpad (multiple moderators OK!)

==> I'll be honest, I have no idea what some of the sessions are
supposed to be, so also:

* write a short description for the session so the agenda can be updated


For those of you who want to know what it takes, check out the
Moderator's Guide:
https://wiki.openstack.org/wiki/Operations/Meetups#Moderators_Guide &
ask questions - we're here to help!



Regards,


Tom, on behalf of the Ops Meetups Team
https://wiki.openstack.org/wiki/Ops_Meetups_Team



------------------------------

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


End of OpenStack-operators Digest, Vol 72, Issue 11
***************************************************
