<html xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Title" content="">
<meta name="Keywords" content="">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:12.0pt;
font-family:"Times New Roman";}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
span.apple-tab-span
{mso-style-name:apple-tab-span;}
span.EmailStyle18
{mso-style-type:personal-reply;
font-family:Calibri;
color:windowtext;}
span.msoIns
{mso-style-type:export-only;
mso-style-name:"";
text-decoration:underline;
color:teal;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style>
</head>
<body bgcolor="white" lang="EN-US" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri">Patricia,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri">I believe we announce which session we would like to moderate and add our names to the respective etherpad.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri">Thanks,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri">Edgar<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri"><o:p> </o:p></span></p>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal"><b><span style="font-family:Calibri;color:black">From: </span>
</b><span style="font-family:Calibri;color:black">Patricia Dugan <patricia.dugan@oneops.com><br>
<b>Date: </b>Thursday, October 13, 2016 at 7:33 AM<br>
<b>To: </b>"openstack-operators@lists.openstack.org" <openstack-operators@lists.openstack.org><br>
<b>Subject: </b>[Openstack-operators] the url for moderator request<o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<p class="MsoNormal">Is this the url for the moderator request: <a href="https://etherpad.openstack.org/p/BCN-ops-meetup">https://etherpad.openstack.org/p/BCN-ops-meetup</a>?<br>
At the bottom, is that where we are supposed to fill it out? <o:p></o:p></p>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<div>
<p class="MsoNormal">On Oct 13, 2016, at 12:19 AM, <a href="mailto:openstack-operators-request@lists.openstack.org">
openstack-operators-request@lists.openstack.org</a> wrote:<o:p></o:p></p>
</div>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p class="MsoNormal">Send OpenStack-operators mailing list submissions to<br>
<a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators<br>
<br>
or, via email, send a message with subject or body 'help' to<br>
openstack-operators-request@lists.openstack.org<br>
<br>
You can reach the person managing the list at<br>
openstack-operators-owner@lists.openstack.org<br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of OpenStack-operators digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. Re: [openstack-operators][ceph][nova] How do you handle Nova<br>
on Ceph? (Adam Kijak)<br>
2. Ubuntu package for Octavia (Lutz Birkhahn)<br>
3. Re: [openstack-operators][ceph][nova] How do you handle Nova<br>
on Ceph? (Adam Kijak)<br>
4. [nova] Does anyone use the os-diagnostics API? (Matt Riedemann)<br>
5. OPNFV delivered its new Colorado release (Ulrich Kleber)<br>
6. Re: [nova] Does anyone use the os-diagnostics API? (Tim Bell)<br>
7. Re: [nova] Does anyone use the os-diagnostics API? (Joe Topjian)<br>
8. Re: OPNFV delivered its new Colorado release (Jay Pipes)<br>
9. glance, nova backed by NFS (Curtis)<br>
10. Re: glance, nova backed by NFS (Kris G. Lindgren)<br>
11. Re: glance, nova backed by NFS (Tobias Schön)<br>
12. Re: glance, nova backed by NFS (Kris G. Lindgren)<br>
13. Re: glance, nova backed by NFS (Curtis)<br>
14. Re: glance, nova backed by NFS (James Penick)<br>
15. Re: glance, nova backed by NFS (Curtis)<br>
16. Re: Ubuntu package for Octavia (Xav Paice)<br>
17. Re: [openstack-operators][ceph][nova] How do you handle Nova<br>
on Ceph? (Warren Wang)<br>
18. Re: [openstack-operators][ceph][nova] How do you handle Nova<br>
on Ceph? (Clint Byrum)<br>
19. Disable console for an instance (Blair Bethwaite)<br>
20. host maintenance (Juvonen, Tomi (Nokia - FI/Espoo))<br>
21. Ops@Barcelona - Call for Moderators (Tom Fifield)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Wed, 12 Oct 2016 12:23:41 +0000<br>
From: Adam Kijak <adam.kijak@corp.ovh.com><br>
To: Xav Paice <xavpaice@gmail.com>,<br>
"openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]<br>
How do you handle Nova on Ceph?<br>
Message-ID: <839b3aba73394bf9aae56c801687e50c@corp.ovh.com><br>
Content-Type: text/plain; charset="iso-8859-1"<br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">________________________________________<br>
From: Xav Paice <xavpaice@gmail.com><br>
Sent: Monday, October 10, 2016 8:41 PM<br>
To: openstack-operators@lists.openstack.org<br>
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?<br>
<br>
On Mon, 2016-10-10 at 13:29 +0000, Adam Kijak wrote:<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">Hello,<br>
<br>
We use a Ceph cluster for Nova (Glance and Cinder as well) and over time<br>
more and more data is stored there. We can't keep the cluster so big<br>
because of Ceph's limitations: sooner or later it needs to be closed to<br>
new instances, images and volumes. Not to mention it's a big failure<br>
domain.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
I'm really keen to hear more about those limitations.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
Basically it's all related to the failure domain ("blast radius") and risk management.<br>
A bigger Ceph cluster means more users.<br>
Growing the Ceph cluster temporarily slows it down, so many users will be affected.<br>
There are bugs in Ceph which can cause data corruption. It's rare, but when it happens it can affect many (maybe all) users of the Ceph cluster.<br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal"><br>
How do you handle this issue?<br>
What is your strategy to divide Ceph clusters between compute nodes?<br>
How do you solve VM snapshot placement and migration issues then<br>
(snapshots will be left on older Ceph)?<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
Having played with Ceph and compute on the same hosts, I'm a big fan of<br>
separating them and having dedicated Ceph hosts, and dedicated compute<br>
hosts. That allows me a lot more flexibility with hardware<br>
configuration and maintenance, easier troubleshooting for resource<br>
contention, and also allows scaling at different rates.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
Exactly, I consider it the best practice as well.<br>
<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Wed, 12 Oct 2016 12:25:38 +0000<br>
From: Lutz Birkhahn <lutz.birkhahn@noris.de><br>
To: "openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: [Openstack-operators] Ubuntu package for Octavia<br>
Message-ID: <B1F091B0-61C6-4AB2-AEEC-F2F458101C11@noris.de><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Has anyone seen Ubuntu packages for Octavia yet?<br>
<br>
We're running Ubuntu 16.04 with Newton, but for whatever reason I cannot find any Octavia package...<br>
<br>
So far I've only found the following in https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun:<br>
<br>
Ubuntu Packages Setup: Install octavia with your favorite distribution: "pip install octavia"<br>
<br>
That was not exactly what we would like to do in our production cloud...<br>
<br>
Thanks,<br>
<br>
/lutz<br>
-------------- next part --------------<br>
A non-text attachment was scrubbed...<br>
Name: smime.p7s<br>
Type: application/x-pkcs7-signature<br>
Size: 6404 bytes<br>
Desc: not available<br>
URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20161012/7a1bed2f/attachment-0001.bin><br>
<br>
------------------------------<br>
<br>
Message: 3<br>
Date: Wed, 12 Oct 2016 12:35:48 +0000<br>
From: Adam Kijak <adam.kijak@corp.ovh.com><br>
To: Abel Lopez <alopgeek@gmail.com><br>
Cc: openstack-operators <openstack-operators@lists.openstack.org><br>
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]<br>
How do you handle Nova on Ceph?<br>
Message-ID: <29c531e1c0614dc1bd1cf587d69aa45b@corp.ovh.com><br>
Content-Type: text/plain; charset="iso-8859-1"<br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">_______________________________________<br>
From: Abel Lopez <alopgeek@gmail.com><br>
Sent: Monday, October 10, 2016 9:57 PM<br>
To: Adam Kijak<br>
Cc: openstack-operators<br>
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?<br>
<br>
Have you thought about dedicated pools for cinder/nova and a separate pool for glance, and any other uses you might have?<br>
You need to setup secrets on kvm, but you can have cinder creating volumes from glance images quickly in different pools<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
We already have separate pools for images, volumes and instances. <br>
Separate pools don't really split the failure domain, though.<br>
Also, AFAIK you can't set up multiple pools for instances in nova.conf, right?<br>
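For reference, that single-pool limitation shows up in the libvirt section of nova.conf, which takes exactly one RBD pool per compute node. A minimal sketch (the pool name and secret UUID here are made up):

```ini
# Hypothetical nova.conf excerpt for an RBD-backed compute node.
# images_rbd_pool accepts a single pool, which is why one nova.conf
# cannot spread instances across several Ceph pools.
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```

Splitting the failure domain therefore means running several Ceph clusters and dedicating groups of compute nodes (e.g. via host aggregates or cells) to each one.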
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 4<br>
Date: Wed, 12 Oct 2016 09:00:11 -0500<br>
From: Matt Riedemann <mriedem@linux.vnet.ibm.com><br>
To: "openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: [Openstack-operators] [nova] Does anyone use the<br>
os-diagnostics API?<br>
Message-ID: <5dae5c89-b682-15c7-11c6-d9a5481076a4@linux.vnet.ibm.com><br>
Content-Type: text/plain; charset=utf-8; format=flowed<br>
<br>
The current form of the nova os-diagnostics API is hypervisor-specific, <br>
which makes it pretty unusable in any generic way, which is why Tempest <br>
doesn't test it.<br>
<br>
Way back when the v3 API was a thing for 2 minutes there was work done <br>
to standardize the diagnostics information across virt drivers in nova. <br>
The only thing is we haven't exposed that out of the REST API yet, but <br>
there is a spec proposing to do that now:<br>
<br>
https://review.openstack.org/#/c/357884/<br>
<br>
This is an admin-only API so we're trying to keep an end user point of <br>
view out of discussing it. For example, the disk details don't have any <br>
unique identifier. We could add one, but would it be useful to an admin?<br>
<br>
This API is really supposed to be for debug, but the question I have for <br>
this list is does anyone actually use the existing os-diagnostics API? <br>
And if so, how do you use it, and what information is most useful? If <br>
you are using it, please review the spec and provide any input on what's <br>
proposed for outputs.<br>
<br>
-- <br>
<br>
Thanks,<br>
<br>
Matt Riedemann<br>
<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 5<br>
Date: Wed, 12 Oct 2016 14:17:50 +0000<br>
From: Ulrich Kleber <Ulrich.Kleber@huawei.com><br>
To: "openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: [Openstack-operators] OPNFV delivered its new Colorado<br>
release<br>
Message-ID: <884BFBB6F562F44F91BF83F77C24E9972E7EB5F0@lhreml507-mbx><br>
Content-Type: text/plain; charset="iso-8859-1"<br>
<br>
Hi,<br>
I didn't see an official announcement, so I'd like to point you to the new release of OPNFV.<br>
https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0<br>
OPNFV is an open source project and one of the most important users of OpenStack in the Telecom/NFV area. It may be interesting for your work.<br>
Feel free to contact me or meet during the Barcelona summit at the session of the OpenStack Operators Telecom/NFV Functional Team (https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16768/openstack-operators-telecomnfv-functional-team).<br>
Cheers,<br>
Uli<br>
<br>
<br>
Ulrich KLEBER<br>
Chief Architect Cloud Platform<br>
European Research Center<br>
IT R&D Division<br>
[huawei_logo]<br>
Riesstraße 25<br>
80992 München<br>
Mobile: +49 (0)173 4636144<br>
Mobile (China): +86 13005480404<br>
<br>
<br>
<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20161012/7dcc7cae/attachment-0001.html><br>
-------------- next part --------------<br>
A non-text attachment was scrubbed...<br>
Name: image001.jpg<br>
Type: image/jpeg<br>
Size: 6737 bytes<br>
Desc: image001.jpg<br>
URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20161012/7dcc7cae/attachment-0001.jpg><br>
<br>
------------------------------<br>
<br>
Message: 6<br>
Date: Wed, 12 Oct 2016 14:35:54 +0000<br>
From: Tim Bell <Tim.Bell@cern.ch><br>
To: Matt Riedemann <mriedem@linux.vnet.ibm.com><br>
Cc: "openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: Re: [Openstack-operators] [nova] Does anyone use the<br>
os-diagnostics API?<br>
Message-ID: <248C8965-1ECE-4CA3-9B88-A7C75CF8B3AD@cern.ch><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal" style="margin-bottom:12.0pt">On 12 Oct 2016, at 07:00, Matt Riedemann <mriedem@linux.vnet.ibm.com> wrote:<br>
<br>
The current form of the nova os-diagnostics API is hypervisor-specific, which makes it pretty unusable in any generic way, which is why Tempest doesn't test it.<br>
<br>
Way back when the v3 API was a thing for 2 minutes there was work done to standardize the diagnostics information across virt drivers in nova. The only thing is we haven't exposed that out of the REST API yet, but there is a spec proposing to do that now:<br>
<br>
https://review.openstack.org/#/c/357884/<br>
<br>
This is an admin-only API so we're trying to keep an end user point of view out of discussing it. For example, the disk details don't have any unique identifier. We could add one, but would it be useful to an admin?<br>
<br>
This API is really supposed to be for debug, but the question I have for this list is does anyone actually use the existing os-diagnostics API? And if so, how do you use it, and what information is most useful? If you are using it, please review the spec and
provide any input on what's proposed for outputs.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
Matt,<br>
<br>
Thanks for asking. We've used the API in the past as a way of getting the usage data out of Nova. We had problems running ceilometer at scale and this was a way of retrieving the data for our accounting reports. We created a special policy configuration to
allow authorised users to query this data without full admin rights.<br>
<br>
From the look of the new spec, it would be fairly straightforward to adapt the process to use the new format as all the CPU utilisation data is there.<br>
<br>
Tim<br>
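The "special policy configuration" Tim mentions can be sketched as a nova policy override; the role name below is invented for illustration (by default this API is admin-only):

```json
{
    "os_compute_api:os-server-diagnostics": "rule:admin_api or role:accounting"
}
```

With such a rule in nova's policy.json, members of the hypothetical accounting role could call GET /servers/{id}/diagnostics without full admin rights.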
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">-- <br>
<br>
Thanks,<br>
<br>
Matt Riedemann<br>
<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
OpenStack-operators@lists.openstack.org<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
<br>
------------------------------<br>
<br>
Message: 7<br>
Date: Wed, 12 Oct 2016 08:44:14 -0600<br>
From: Joe Topjian <joe@topjian.net><br>
To: "openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: Re: [Openstack-operators] [nova] Does anyone use the<br>
os-diagnostics API?<br>
Message-ID:<br>
<CA+y7hvg9V8sa-esRGK-mW=OGhiDv5+P7ak=u4zGzppmwZdu6Jg@mail.gmail.com><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Hi Matt, Tim,<br>
<br>
Thanks for asking. We've used the API in the past as a way of getting the<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal" style="margin-bottom:12.0pt">usage data out of Nova. We had problems running ceilometer at scale and<br>
this was a way of retrieving the data for our accounting reports. We<br>
created a special policy configuration to allow authorised users query this<br>
data without full admin rights.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
We do this as well.<br>
<br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal" style="margin-bottom:12.0pt">From the look of the new spec, it would be fairly straightforward to adapt<br>
the process to use the new format as all the CPU utilisation data is there.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
I agree.<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20161012/dda817a4/attachment-0001.html><br>
<br>
------------------------------<br>
<br>
Message: 8<br>
Date: Wed, 12 Oct 2016 13:02:11 -0400<br>
From: Jay Pipes <jaypipes@gmail.com><br>
To: openstack-operators@lists.openstack.org<br>
Subject: Re: [Openstack-operators] OPNFV delivered its new Colorado<br>
release<br>
Message-ID: <fe2e53bb-6ed2-c1af-bbee-97c3220a30c2@gmail.com><br>
Content-Type: text/plain; charset=windows-1252; format=flowed<br>
<br>
On 10/12/2016 10:17 AM, Ulrich Kleber wrote:<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">Hi,<br>
<br>
I didn't see an official announcement, so I'd like to point you to the new<br>
release of OPNFV.<br>
<br>
https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0<br>
<br>
OPNFV is an open source project and one of the most important users of<br>
OpenStack in the Telecom/NFV area. Maybe it is interesting for your work.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
Hi Ulrich,<br>
<br>
I'm hoping you can explain to me what exactly OPNFV is producing in its <br>
releases. I've been through a number of the Jira items linked in the <br>
press release above and simply cannot tell what is being actually <br>
delivered by OPNFV versus what is just something that is in an OpenStack <br>
component or deployment.<br>
<br>
A good example of this is the IPV6 project's Jira item here:<br>
<br>
https://jira.opnfv.org/browse/IPVSIX-37<br>
<br>
Which has the title of "Auto-installation of both underlay IPv6 and <br>
overlay IPv6". The issue is marked as "Fixed" in Colorado 1.0. However, <br>
I can't tell what code was produced in OPNFV that delivers the <br>
auto-installation of both an underlay IPv6 and an overlay IPv6.<br>
<br>
In short, I'm confused about what OPNFV is producing and hope to get <br>
some insights from you.<br>
<br>
Best,<br>
-jay<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 9<br>
Date: Wed, 12 Oct 2016 11:21:27 -0600<br>
From: Curtis <serverascode@gmail.com><br>
To: "openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: [Openstack-operators] glance, nova backed by NFS<br>
Message-ID:<br>
<CAJ_JamBDSb1q9zz2oBwHRBaee5-V0tZJoRd=8yYRPnBFxy4H+g@mail.gmail.com><br>
Content-Type: text/plain; charset=UTF-8<br>
<br>
Hi All,<br>
<br>
I've never used NFS with OpenStack before. But I am now with a small<br>
lab deployment with a few compute nodes.<br>
<br>
Is there anything special I should do with NFS and glance and nova? I<br>
remember there was an issue way back when of images being deleted b/c<br>
certain components weren't aware they are on NFS. I'm guessing that<br>
has changed but just wanted to check if there is anything specific I<br>
should be doing configuration-wise.<br>
<br>
I can't seem to find many examples of NFS usage...so feel free to<br>
point me to any documentation, blog posts, etc. I may have just missed<br>
it.<br>
<br>
Thanks,<br>
Curtis.<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 10<br>
Date: Wed, 12 Oct 2016 17:58:47 +0000<br>
From: "Kris G. Lindgren" <klindgren@godaddy.com><br>
To: Curtis <serverascode@gmail.com>,<br>
"openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: Re: [Openstack-operators] glance, nova backed by NFS<br>
Message-ID: <94226DBF-0D6F-4585-9341-37E193C5F0E6@godaddy.com><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
We don't use shared storage at all, but I do remember what you are talking about. The issue is that compute nodes weren't aware they were on shared storage, and would nuke the backing image from shared storage after all VMs on *that* compute node had stopped
using it, not after *all* VMs had stopped using it.<br>
<br>
https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to address that concern has landed, but only in trunk, maybe Mitaka. The stable releases don't appear to be shared-backing-image safe.<br>
<br>
You might be able to get around this by setting the compute image manager task not to run, but the issue with that is that one missed compute node means everyone has a bad day.<br>
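The workaround described here corresponds to nova's image cache manager settings; a hypothetical compute-node fragment (an illustration, not a recommendation) could be:

```ini
# Hypothetical nova.conf excerpt for NFS-backed compute nodes.
[DEFAULT]
# Keep unused base (backing) images instead of deleting them...
remove_unused_base_images = False
# ...or disable the periodic image cache manager task entirely.
image_cache_manager_interval = -1
```

As the message warns, a single node left with the defaults can still delete backing images that other nodes depend on.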
<br>
___________________________________________________________________<br>
Kris Lindgren<br>
Senior Linux Systems Engineer<br>
GoDaddy<br>
<br>
On 10/12/16, 11:21 AM, "Curtis" <serverascode@gmail.com> wrote:<br>
<br>
Hi All,<br>
<br>
I've never used NFS with OpenStack before. But I am now with a small<br>
lab deployment with a few compute nodes.<br>
<br>
Is there anything special I should do with NFS and glance and nova? I<br>
remember there was an issue way back when of images being deleted b/c<br>
certain components weren't aware they are on NFS. I'm guessing that<br>
has changed but just wanted to check if there is anything specific I<br>
should be doing configuration-wise.<br>
<br>
I can't seem to find many examples of NFS usage...so feel free to<br>
point me to any documentation, blog posts, etc. I may have just missed<br>
it.<br>
<br>
Thanks,<br>
Curtis.<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
OpenStack-operators@lists.openstack.org<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 11<br>
Date: Wed, 12 Oct 2016 17:59:13 +0000<br>
From: Tobias Schön <Tobias.Schon@fiberdata.se><br>
To: Curtis <serverascode@gmail.com>,<br>
"openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: Re: [Openstack-operators] glance, nova backed by NFS<br>
Message-ID: <58f74b23f1254f5886df90183092a32b@elara.ad.fiberdata.se><br>
Content-Type: text/plain; charset="iso-8859-1"<br>
<br>
Hi,<br>
<br>
We have an environment with glance and cinder using NFS.<br>
It's important that they have the correct rights: the shares should be owned by nova on the compute nodes if mounted at /var/lib/nova/instances,<br>
and the same for nova and glance on the controller.<br>
<br>
It's important that you mount the glance and nova shares via fstab.<br>
<br>
The cinder one is controlled by the nfsdriver.<br>
<br>
We are running RHEL OSP 6, OpenStack Juno.<br>
<br>
This parameter is used:<br>
nfs_shares_config=/etc/cinder/shares-nfs.conf in the /etc/cinder/cinder.conf file and then we have specified the share in /etc/cinder/shares-nfs.conf.<br>
<br>
chmod 0640 /etc/cinder/shares-nfs.conf<br>
<br>
setsebool -P virt_use_nfs on<br>
This one is important to make it work with SELinux<br>
<br>
How up to date this is I honestly don't know, but it was current per the Red Hat documentation when we deployed it around 1.5 years ago.<br>
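The fstab mapping mentioned above might look like this (the NFS server name and export paths are made up):

```
# /etc/fstab on a compute node and on the controller, respectively
nfs01:/export/nova    /var/lib/nova/instances  nfs  defaults,_netdev  0 0
nfs01:/export/glance  /var/lib/glance/images   nfs  defaults,_netdev  0 0
```

The cinder share, by contrast, is handled by the NFS driver via nfs_shares_config rather than fstab, as described above.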
<br>
//Tobias<br>
<br>
-----Ursprungligt meddelande-----<br>
Från: Curtis [mailto:serverascode@gmail.com] <br>
Skickat: den 12 oktober 2016 19:21<br>
Till: openstack-operators@lists.openstack.org<br>
Ämne: [Openstack-operators] glance, nova backed by NFS<br>
<br>
Hi All,<br>
<br>
I've never used NFS with OpenStack before. But I am now with a small lab deployment with a few compute nodes.<br>
<br>
Is there anything special I should do with NFS and glance and nova? I remember there was an issue way back when of images being deleted b/c certain components weren't aware they are on NFS. I'm guessing that has changed but just wanted to check if there is
anything specific I should be doing configuration-wise.<br>
<br>
I can't seem to find many examples of NFS usage...so feel free to point me to any documentation, blog posts, etc. I may have just missed it.<br>
<br>
Thanks,<br>
Curtis.<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
OpenStack-operators@lists.openstack.org<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 12<br>
Date: Wed, 12 Oct 2016 18:06:31 +0000<br>
From: "Kris G. Lindgren" <klindgren@godaddy.com><br>
To: Tobias Schön <Tobias.Schon@fiberdata.se>, Curtis<br>
<serverascode@gmail.com>, "openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: Re: [Openstack-operators] glance, nova backed by NFS<br>
Message-ID: <88AE471E-81CD-4E72-935D-4390C05F5D33@godaddy.com><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Tobias does bring up something that we have ran into before.<br>
<br>
With NFSv3, user mapping is done by numeric ID, so you need to ensure that all of your servers use the same UID for nova/glance. If you are using packages/automation that run useradd without pinning the UID, it's *very* easy to have mismatched username/UID pairs across
multiple boxes.<br>
<br>
NFSv4, iirc, sends the username and the NFS server does the translation of the name to a UID, so it should not have this issue. But we have been bitten by that more than once on NFSv3.<br>
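A quick way to spot such a mismatch is to collect `id -u nova` from every host and count the distinct values. This sketch fakes the collection step; the hostnames and UIDs are made up:

```shell
# collect() stands in for something like:
#   for h in compute01 compute02 glance01; do echo "$h $(ssh "$h" id -u nova)"; done
collect() {
  printf '%s\n' 'compute01 162' 'compute02 162' 'glance01 164'
}

# More than one distinct UID means NFSv3 ownership will not line up.
distinct=$(collect | awk '{print $2}' | sort -u | wc -l)
echo "distinct nova UIDs: $distinct"
```

Here glance01 disagrees with the compute nodes, so files written by nova on one box would appear owned by a different user on another.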
<br>
<br>
___________________________________________________________________<br>
Kris Lindgren<br>
Senior Linux Systems Engineer<br>
GoDaddy<br>
<br>
On 10/12/16, 11:59 AM, "Tobias Schön" <Tobias.Schon@fiberdata.se> wrote:<br>
<br>
Hi,<br>
<br>
We have an environment with glance and cinder using NFS.<br>
It's important that they have the correct rights. The shares should be owned by nova on compute if mounted up on /var/lib/nova/instances<br>
And the same for nova and glance on the controller..<br>
<br>
It's important that you map the glance and nova up in fstab.<br>
<br>
The cinder one is controlled by the nfsdriver.<br>
<br>
We are running rhelosp6, Openstack Juno.<br>
<br>
This parameter is used:<br>
nfs_shares_config=/etc/cinder/shares-nfs.conf in the /etc/cinder/cinder.conf file and then we have specified the share in /etc/cinder/shares-nfs.conf.<br>
<br>
chmod 0640 /etc/cinder/shares-nfs.conf<br>
<br>
setsebool -P virt_use_nfs on<br>
This one is important to make it work with SELinux<br>
<br>
How up to date this is actually I don't know tbh, but it was up to date as of redhat documentation when we deployed it around 1.5y ago.<br>
<br>
//Tobias<br>
<br>
-----Ursprungligt meddelande-----<br>
Från: Curtis [mailto:serverascode@gmail.com] <br>
Skickat: den 12 oktober 2016 19:21<br>
Till: openstack-operators@lists.openstack.org<br>
Ämne: [Openstack-operators] glance, nova backed by NFS<br>
<br>
Hi All,<br>
<br>
I've never used NFS with OpenStack before. But I am now with a small lab deployment with a few compute nodes.<br>
<br>
Is there anything special I should do with NFS and glance and nova? I remember there was an issue way back when of images being deleted b/c certain components weren't aware they are on NFS. I'm guessing that has changed but just wanted to check if there
is anything specific I should be doing configuration-wise.<br>
<br>
I can't seem to find many examples of NFS usage...so feel free to point me to any documentation, blog posts, etc. I may have just missed it.<br>
<br>
Thanks,<br>
Curtis.<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
OpenStack-operators@lists.openstack.org<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
OpenStack-operators@lists.openstack.org<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 13<br>
Date: Wed, 12 Oct 2016 12:18:40 -0600<br>
From: Curtis <serverascode@gmail.com><br>
To: "Kris G. Lindgren" <klindgren@godaddy.com><br>
Cc: "openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: Re: [Openstack-operators] glance, nova backed by NFS<br>
Message-ID:<br>
<CAJ_JamDhHBm7APEWDO1HMEfm7YEb3rT7x_cOjxycRp3JHvOxHQ@mail.gmail.com><br>
Content-Type: text/plain; charset=UTF-8<br>
<br>
On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren<br>
<klindgren@godaddy.com> wrote:<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">We don?t use shared storage at all. But I do remember what you are talking about. The issue is that compute nodes weren?t aware they were on shared storage, and would nuke the backing mage from shared storage, after all vm?s on *that*
compute node had stopped using it. Not after all vm?s had stopped using it.<br>
<br>
https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to address that concern has landed but only in trunk maybe mitaka. Any stable releases don?t appear to be shared backing image safe.<br>
<br>
You might be able to get around this by setting the compute image manager task to not run. But the issue with that is that with one missed compute node, everyone will have a bad day.<o:p></o:p></p>
</blockquote>
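For reference, the workaround Kris describes (keeping the image cache cleanup from ever deleting a shared base image) would look roughly like this in nova.conf on every compute node. This is a sketch: the option names are from Mitaka-era nova and should be verified against your release's configuration reference.

```ini
[DEFAULT]
# Disable the periodic image cache manager entirely
# (-1 disables it on Mitaka-era releases).
image_cache_manager_interval = -1
# Belt and braces: even if the task runs, never delete
# unused base images.
remove_unused_base_images = False
```

Note the failure mode Kris points out still applies: if a single compute node is missed when rolling this out, that node can still delete base images other nodes depend on.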
<p class="MsoNormal"><br>
Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka,<br>
and I will look into that bugfix. I guess I need to test this lol.<br>
<br>
Thanks,<br>
Curtis.<br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal" style="margin-bottom:12.0pt"><br>
___________________________________________________________________<br>
Kris Lindgren<br>
Senior Linux Systems Engineer<br>
GoDaddy<br>
<br>
On 10/12/16, 11:21 AM, "Curtis" <serverascode@gmail.com> wrote:<br>
<br>
Hi All,<br>
<br>
I've never used NFS with OpenStack before. But I am now with a small<br>
lab deployment with a few compute nodes.<br>
<br>
Is there anything special I should do with NFS and glance and nova? I<br>
remember there was an issue way back when of images being deleted b/c<br>
certain components weren't aware they are on NFS. I'm guessing that<br>
has changed but just wanted to check if there is anything specific I<br>
should be doing configuration-wise.<br>
<br>
I can't seem to find many examples of NFS usage...so feel free to<br>
point me to any documentation, blog posts, etc. I may have just missed<br>
it.<br>
<br>
Thanks,<br>
Curtis.<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
OpenStack-operators@lists.openstack.org<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators<br>
<br>
<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
<br>
<br>
-- <br>
Blog: serverascode.com<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 14<br>
Date: Wed, 12 Oct 2016 11:34:39 -0700<br>
From: James Penick <jpenick@gmail.com><br>
To: Curtis <serverascode@gmail.com><br>
Cc: "openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: Re: [Openstack-operators] glance, nova backed by NFS<br>
Message-ID:<br>
<CAMomh-6y5H_2ETGUY_2_Uoz+Sq8POULb9vsKBWwcKovB8QdvGQ@mail.gmail.com><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Are you backing both glance and nova-compute with NFS? If you're only<br>
putting the glance store on NFS you don't need any special changes. It'll<br>
Just Work.<br>
<br>
On Wed, Oct 12, 2016 at 11:18 AM, Curtis <serverascode@gmail.com> wrote:<br>
<br>
<br>
<o:p></o:p></p>
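For what it's worth, the "Just Work" case James describes is simply the default glance filesystem store with its images directory NFS-mounted; no glance-specific NFS options are needed. A minimal glance-api.conf sketch (paths and export names here are illustrative):

```ini
[glance_store]
stores = file,http
default_store = file
# This directory is just an NFS mount, e.g. in /etc/fstab:
#   nfs-server:/export/glance  /var/lib/glance/images  nfs  defaults  0 0
filesystem_store_datadir = /var/lib/glance/images/
```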
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren<br>
<klindgren@godaddy.com> wrote:<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">We don't use shared storage at all. But I do remember what you are<o:p></o:p></p>
</blockquote>
<p class="MsoNormal">talking about. The issue is that compute nodes weren't aware they were on<br>
shared storage, and would nuke the backing image from shared storage after<br>
all VMs on *that* compute node had stopped using it, not after all VMs<br>
had stopped using it.<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal"><br>
https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to<o:p></o:p></p>
</blockquote>
<p class="MsoNormal">address that concern has landed, but only in trunk, maybe Mitaka. Any<br>
stable releases don't appear to be shared-backing-image safe.<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal"><br>
You might be able to get around this by setting the compute image<o:p></o:p></p>
</blockquote>
<p class="MsoNormal">manager task to not run. But the issue with that is that with one missed<br>
compute node, everyone will have a bad day.<br>
<br>
Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka,<br>
and I will look into that bugfix. I guess I need to test this lol.<br>
<br>
Thanks,<br>
Curtis.<br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal"><br>
___________________________________________________________________<br>
Kris Lindgren<br>
Senior Linux Systems Engineer<br>
GoDaddy<br>
<br>
On 10/12/16, 11:21 AM, "Curtis" <serverascode@gmail.com> wrote:<br>
<br>
Hi All,<br>
<br>
I've never used NFS with OpenStack before. But I am now with a small<br>
lab deployment with a few compute nodes.<br>
<br>
Is there anything special I should do with NFS and glance and nova? I<br>
remember there was an issue way back when of images being deleted b/c<br>
certain components weren't aware they are on NFS. I'm guessing that<br>
has changed but just wanted to check if there is anything specific I<br>
should be doing configuration-wise.<br>
<br>
I can't seem to find many examples of NFS usage...so feel free to<br>
point me to any documentation, blog posts, etc. I may have just<o:p></o:p></p>
</blockquote>
<p class="MsoNormal">missed<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal"> it.<br>
<br>
Thanks,<br>
Curtis.<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
OpenStack-operators@lists.openstack.org<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/<o:p></o:p></p>
</blockquote>
<p class="MsoNormal">openstack-operators<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal" style="margin-bottom:12.0pt"><o:p> </o:p></p>
</blockquote>
<p class="MsoNormal" style="margin-bottom:12.0pt"><br>
<br>
<br>
--<br>
Blog: serverascode.com<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
OpenStack-operators@lists.openstack.org<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators<o:p></o:p></p>
</blockquote>
<p class="MsoNormal">-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20161012/d894399e/attachment-0001.html><br>
<br>
------------------------------<br>
<br>
Message: 15<br>
Date: Wed, 12 Oct 2016 12:49:40 -0600<br>
From: Curtis <serverascode@gmail.com><br>
To: James Penick <jpenick@gmail.com><br>
Cc: "openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: Re: [Openstack-operators] glance, nova backed by NFS<br>
Message-ID:<br>
<CAJ_JamBYkB54yKS=V8a_b+FWKSuUtMEJFqm==mzaRj738RxWBQ@mail.gmail.com><br>
Content-Type: text/plain; charset=UTF-8<br>
<br>
On Wed, Oct 12, 2016 at 12:34 PM, James Penick <jpenick@gmail.com> wrote:<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">Are you backing both glance and nova-compute with NFS? If you're only<br>
putting the glance store on NFS you don't need any special changes. It'll<br>
Just Work.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
I've got both glance and nova backed by NFS. Haven't put up cinder<br>
yet, but that will also be NFS backed. I just have very limited<br>
storage on the compute hosts, basically just enough for the operating<br>
system; this is just a small but permanent lab deployment. Good to<br>
hear that Glance will Just Work. :) Thanks!<br>
<br>
Thanks,<br>
Curtis.<br>
<br>
<br>
<o:p></o:p></p>
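For the cinder piece Curtis mentions, the generic NFS driver is configured along these lines; a sketch with illustrative backend and share names (check the option names against your release):

```ini
[DEFAULT]
enabled_backends = nfs1

[nfs1]
volume_backend_name = nfs1
volume_driver = cinder.volume.drivers.nfs.NfsDriver
# One NFS share per line in this file, e.g.:
#   nfs-server:/export/cinder
nfs_shares_config = /etc/cinder/nfs_shares
# Where cinder-volume mounts the shares locally.
nfs_mount_point_base = /var/lib/cinder/mnt
```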
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal"><br>
On Wed, Oct 12, 2016 at 11:18 AM, Curtis <serverascode@gmail.com> wrote:<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal"><br>
On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren<br>
<klindgren@godaddy.com> wrote:<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">We don't use shared storage at all. But I do remember what you are<br>
talking about. The issue is that compute nodes weren't aware they were on<br>
shared storage, and would nuke the backing image from shared storage after<br>
all VMs on *that* compute node had stopped using it, not after all VMs had<br>
stopped using it.<br>
<br>
https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to<br>
address that concern has landed, but only in trunk, maybe Mitaka. Any stable<br>
releases don't appear to be shared-backing-image safe.<br>
<br>
You might be able to get around this by setting the compute image<br>
manager task to not run. But the issue with that will be one missed compute<br>
node, and everyone will have a bad day.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka,<br>
and I will look into that bugfix. I guess I need to test this lol.<br>
<br>
Thanks,<br>
Curtis.<br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal" style="margin-bottom:12.0pt"><br>
___________________________________________________________________<br>
Kris Lindgren<br>
Senior Linux Systems Engineer<br>
GoDaddy<br>
<br>
On 10/12/16, 11:21 AM, "Curtis" <serverascode@gmail.com> wrote:<br>
<br>
Hi All,<br>
<br>
I've never used NFS with OpenStack before. But I am now with a small<br>
lab deployment with a few compute nodes.<br>
<br>
Is there anything special I should do with NFS and glance and nova? I<br>
remember there was an issue way back when of images being deleted b/c<br>
certain components weren't aware they are on NFS. I'm guessing that<br>
has changed but just wanted to check if there is anything specific I<br>
should be doing configuration-wise.<br>
<br>
I can't seem to find many examples of NFS usage...so feel free to<br>
point me to any documentation, blog posts, etc. I may have just missed<br>
it.<br>
<br>
Thanks,<br>
Curtis.<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
OpenStack-operators@lists.openstack.org<br>
<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators<br>
<br>
<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
<br>
<br>
--<br>
Blog: serverascode.com<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
OpenStack-operators@lists.openstack.org<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators<o:p></o:p></p>
</blockquote>
<p class="MsoNormal" style="margin-bottom:12.0pt"><o:p> </o:p></p>
</blockquote>
<p class="MsoNormal"><br>
<br>
<br>
-- <br>
Blog: serverascode.com<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 16<br>
Date: Thu, 13 Oct 2016 08:24:16 +1300<br>
From: Xav Paice <xavpaice@gmail.com><br>
To: Lutz Birkhahn <lutz.birkhahn@noris.de><br>
Cc: "openstack-operators@lists.openstack.org"<br>
<openstack-operators@lists.openstack.org><br>
Subject: Re: [Openstack-operators] Ubuntu package for Octavia<br>
Message-ID:<br>
<CAMb5Lvru-USCW=GztsWxigccVeUGszYcYn-txk=R9ZqUicva8w@mail.gmail.com><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
I highly recommend looking into Giftwrap for that, until there are UCA<br>
packages.<br>
<br>
The things missing from the packages that Giftwrap produces are init<br>
scripts, config file examples, and the various user and directory setup<br>
stuff. That's easy enough to put into config management or a separate<br>
package if you want to.<br>
<br>
On 13 October 2016 at 01:25, Lutz Birkhahn <lutz.birkhahn@noris.de> wrote:<br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal" style="margin-bottom:12.0pt">Has anyone seen Ubuntu packages for Octavia yet?<br>
<br>
We're running Ubuntu 16.04 with Newton, but for whatever reason I cannot<br>
find any Octavia package.<br>
<br>
So far I've only found the following in<br>
https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun:<br>
<br>
Ubuntu Packages Setup: Install octavia with your favorite<br>
distribution: "pip install octavia"<br>
<br>
That was not exactly what we would like to do in our production cloud.<br>
<br>
Thanks,<br>
<br>
/lutz<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
OpenStack-operators@lists.openstack.org<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators<br>
<br>
<o:p></o:p></p>
</blockquote>
<p class="MsoNormal">-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20161013/bd85871c/attachment-0001.html><br>
<br>
------------------------------<br>
<br>
Message: 17<br>
Date: Wed, 12 Oct 2016 16:02:55 -0400<br>
From: Warren Wang <warren@wangspeed.com><br>
To: Adam Kijak <adam.kijak@corp.ovh.com><br>
Cc: openstack-operators <openstack-operators@lists.openstack.org><br>
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]<br>
How do you handle Nova on Ceph?<br>
Message-ID:<br>
<CAARB8+vKoT3WAtde_vst2x4cDAOiv+S4G4TDOVAq1NniR=4kLQ@mail.gmail.com><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
If fault domain is a concern, you can always split the cloud up into 3<br>
regions, each having a dedicated Ceph cluster. It isn't necessarily going to<br>
mean more hardware, just logical splits. This is kind of assuming that the<br>
network doesn't share the same fault domain though.<br>
<br>
Alternatively, you can split the hardware for the Ceph boxes into multiple<br>
clusters, and use multi-backend Cinder to talk to the same set of<br>
hypervisors to use multiple Ceph clusters. We're doing that to migrate from<br>
one Ceph cluster to another. You can even mount a volume from each cluster<br>
into a single instance.<br>
<br>
Keep in mind that you don't really want to shrink a Ceph cluster too much.<br>
What's "too big"? You should keep growing so that the fault domains aren't<br>
too small (3 physical racks min), or you risk the entire cluster<br>
stopping if you lose the network.<br>
<br>
Just my 2 cents,<br>
Warren<br>
<br>
On Wed, Oct 12, 2016 at 8:35 AM, Adam Kijak <adam.kijak@corp.ovh.com> wrote:<br>
<br>
<br>
<o:p></o:p></p>
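The multi-backend setup Warren describes can be sketched as two RBD backends in one cinder.conf, each pointing at its own cluster's config file; backend and pool names here are illustrative:

```ini
[DEFAULT]
enabled_backends = ceph-old,ceph-new

[ceph-old]
volume_backend_name = ceph-old
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph-old.conf
rbd_pool = volumes
rbd_user = cinder

[ceph-new]
volume_backend_name = ceph-new
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph-new.conf
rbd_pool = volumes
rbd_user = cinder
```

A volume type per backend (keyed on volume_backend_name) then steers new volumes to the right cluster while old volumes stay where they are.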
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">_______________________________________<br>
From: Abel Lopez <alopgeek@gmail.com><br>
Sent: Monday, October 10, 2016 9:57 PM<br>
To: Adam Kijak<br>
Cc: openstack-operators<br>
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]<o:p></o:p></p>
</blockquote>
<p class="MsoNormal">How do you handle Nova on Ceph?<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal"><br>
Have you thought about dedicated pools for cinder/nova and a separate<o:p></o:p></p>
</blockquote>
<p class="MsoNormal">pool for glance, and any other uses you might have?<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">You need to setup secrets on kvm, but you can have cinder creating<o:p></o:p></p>
</blockquote>
<p class="MsoNormal" style="margin-bottom:12.0pt">volumes from glance images quickly in different pools<br>
<br>
We already have separate pools for images, volumes and instances.<br>
Separate pools don't really split the failure domain though.<br>
Also AFAIK you can't set up multiple pools for instances in nova.conf,<br>
right?<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
OpenStack-operators@lists.openstack.org<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators<o:p></o:p></p>
</blockquote>
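For context, the per-service pool split being discussed maps to settings like the following (Mitaka-era option names, pool names illustrative); note nova takes a single ephemeral pool per compute node, which matches Adam's point that you can't configure multiple instance pools in one nova.conf:

```ini
# glance-api.conf: images go to their own pool
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance

# nova.conf (per compute node): ephemeral disks in a separate pool
[libvirt]
images_type = rbd
images_rbd_pool = vms
rbd_user = cinder
# Must reference the libvirt secret holding the cephx key:
rbd_secret_uuid = LIBVIRT_SECRET_UUID
```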
<p class="MsoNormal">-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20161012/656c5638/attachment-0001.html><br>
<br>
------------------------------<br>
<br>
Message: 18<br>
Date: Wed, 12 Oct 2016 13:46:01 -0700<br>
From: Clint Byrum <clint@fewbar.com><br>
To: openstack-operators <openstack-operators@lists.openstack.org><br>
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]<br>
How do<span class="apple-tab-span"><o:p></o:p></span></p>
<p class="MsoNormal">you handle Nova on Ceph?<br>
Message-ID: <1476304977-sup-4753@fewbar.com><br>
Content-Type: text/plain; charset=UTF-8<br>
<br>
Excerpts from Adam Kijak's message of 2016-10-12 12:23:41 +0000:<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">________________________________________<br>
From: Xav Paice <xavpaice@gmail.com><br>
Sent: Monday, October 10, 2016 8:41 PM<br>
To: openstack-operators@lists.openstack.org<br>
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?<br>
<br>
On Mon, 2016-10-10 at 13:29 +0000, Adam Kijak wrote:<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">Hello,<br>
<br>
We use a Ceph cluster for Nova (Glance and Cinder as well) and over time,<br>
more and more data is stored there. We can't keep the cluster so big<br>
because of Ceph's limitations. Sooner or later it needs to be closed to new<br>
instances, images and volumes. Not to mention it's a big failure<br>
domain.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
I'm really keen to hear more about those limitations.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
Basically it's all related to the failure domain ("blast radius") and risk management.<br>
Bigger Ceph cluster means more users.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
Are these risks well documented? Since Ceph is specifically designed<br>
_not_ to have the kind of large blast radius that one might see with<br>
say, a centralized SAN, I'm curious to hear what events trigger<br>
cluster-wide blasts.<br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">Growing the Ceph cluster temporary slows it down, so many users will be affected.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
One might say that a Ceph cluster that can't be grown without the users<br>
noticing is an over-subscribed Ceph cluster. My understanding is that<br>
one is always advised to provision a certain amount of cluster capacity<br>
for growing and replicating to replaced drives.<br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal" style="margin-bottom:12.0pt">There are bugs in Ceph which can cause data corruption. It's rare, but when it happens
<br>
it can affect many (maybe all) users of the Ceph cluster.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
:(<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 19<br>
Date: Thu, 13 Oct 2016 13:37:58 +1100<br>
From: Blair Bethwaite <blair.bethwaite@gmail.com><br>
To: "openstack-oper." <openstack-operators@lists.openstack.org><br>
Subject: [Openstack-operators] Disable console for an instance<br>
Message-ID:<br>
<CA+z5DsyKjC6z4E+xOJv_a-UKbv+bX-+bt2mXDyp3c2e-bJbovA@mail.gmail.com><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Hi all,<br>
<br>
Does anyone know whether there is a way to disable the novnc console on a<br>
per instance basis?<br>
<br>
Cheers,<br>
Blair<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20161013/afafbdc9/attachment-0001.html><br>
<br>
------------------------------<br>
<br>
Message: 20<br>
Date: Thu, 13 Oct 2016 06:12:59 +0000<br>
From: "Juvonen, Tomi (Nokia - FI/Espoo)" <tomi.juvonen@nokia.com><br>
To: "OpenStack-operators@lists.openstack.org"<br>
<OpenStack-operators@lists.openstack.org><br>
Subject: [Openstack-operators] host maintenance<br>
Message-ID:<br>
<AM4PR07MB15694E5C03F9D1E9C255957B85DC0@AM4PR07MB1569.eurprd07.prod.outlook.com><br>
<br>
Content-Type: text/plain; charset="us-ascii"<br>
<br>
Hi,<br>
<br>
We had a session at the Austin summit on host maintenance:<br>
https://etherpad.openstack.org/p/AUS-ops-Nova-maint<br>
<br>
Now the discussion has gotten to the point that we should start prototyping a service hosting the maintenance. For maintenance, Nova could have a link to this new service, but no maintenance functionality should be placed in the Nova project. I was working to
have this, but now it looks better to build the prototype first:<br>
https://review.openstack.org/310510/<br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">From the discussion on the above review, the new service might have a maintenance API endpoint that links to a host by utilizing the "hostid" used in Nova, and then there should be a "tenant_id"-specific endpoint to get what is needed by each project.
Something like:<o:p></o:p></p>
</blockquote>
<p class="MsoNormal">http://maintenancethingy/maintenance/{hostid}<br>
http://maintenancethingy/maintenance/{hostid}/{tenant_id}<br>
This will ensure the tenant will not know details about the host, but can get the needed information about maintenance affecting their instances.<br>
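A trivial sketch of how a client might build those endpoint URLs. Everything here is hypothetical: "maintenancethingy", the path layout, and the identifiers all come from the proposal above, not from a real API.

```python
def maintenance_url(base_url: str, hostid: str, tenant_id: str = None) -> str:
    """Return the host-level maintenance URL, or the tenant-scoped
    one when tenant_id is given (both per the proposed layout)."""
    url = "%s/maintenance/%s" % (base_url.rstrip("/"), hostid)
    if tenant_id:
        url += "/%s" % tenant_id
    return url

# Operator view of a host vs. a tenant's scoped view:
print(maintenance_url("http://maintenancethingy", "hostid123"))
# -> http://maintenancethingy/maintenance/hostid123
print(maintenance_url("http://maintenancethingy", "hostid123", "tenant456"))
# -> http://maintenancethingy/maintenance/hostid123/tenant456
```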
<br>
On the Telco/NFV side we have the OPNFV Doctor project, which sets the requirements for this from that direction. I am personally interested in that part, but to have this serve all operator requirements, it is best to bring it here.<br>
<br>
This could be further discussed in Barcelona, and we should get other people interested in helping to start on this. Any suggestions for the Ops session?<br>
<br>
Looking forward,<br>
Tomi<br>
<br>
<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20161013/9f01d2ce/attachment-0001.html><br>
<br>
------------------------------<br>
<br>
Message: 21<br>
Date: Thu, 13 Oct 2016 15:19:51 +0800<br>
From: Tom Fifield <tom@openstack.org><br>
To: OpenStack Operators <openstack-operators@lists.openstack.org><br>
Subject: [Openstack-operators] Ops@Barcelona - Call for Moderators<br>
Message-ID: <ca8e7f82-b11e-ab3f-0d7c-9cc26719ebf0@openstack.org><br>
Content-Type: text/plain; charset=utf-8; format=flowed<br>
<br>
Hello all,<br>
<br>
The Ops design summit sessions are now listed on the schedule!<br>
<br>
<br>
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Ops+Summit%3A<br>
<br>
Please tick them and set up your summit app :)<br>
<br>
<br>
We are still looking for moderators for the following sessions:<br>
<br>
* OpenStack on Containers<br>
* Containers on OpenStack<br>
* Migration to OpenStack<br>
* Fleet Management<br>
* Feedback to PWG<br>
* Neutron pain points<br>
* Config Mgmt<br>
* HAProxy, MySQL, Rabbit Tuning<br>
* Swift<br>
* Horizon<br>
* OpenStack CLI<br>
* Baremetal Deploy<br>
* OsOps<br>
* CI/CD workflows<br>
* Alt Deployment tech<br>
* ControlPlane Design(multi region)<br>
* Docs<br>
<br>
<br>
==> If you are interested in moderating a session, please<br>
<br>
* write your name in its etherpad (multiple moderators OK!)<br>
<br>
==> I'll be honest, I have no idea what some of the sessions are <br>
supposed to be, so also:<br>
<br>
* write a short description for the session so the agenda can be updated<br>
<br>
<br>
For those of you who want to know what it takes check out the <br>
Moderator's Guide: <br>
https://wiki.openstack.org/wiki/Operations/Meetups#Moderators_Guide & <br>
ask questions - we're here to help!<br>
<br>
<br>
<br>
Regards,<br>
<br>
<br>
Tom, on behalf of the Ops Meetups Team<br>
https://wiki.openstack.org/wiki/Ops_Meetups_Team<br>
<br>
<br>
<br>
------------------------------<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
OpenStack-operators@lists.openstack.org<br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators<br>
<br>
<br>
End of OpenStack-operators Digest, Vol 72, Issue 11<br>
***************************************************<o:p></o:p></p>
</div>
</div>
</blockquote>
</div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
</div>
</div>
</body>
</html>