From william.josefson at gmail.com Sat Oct 1 04:02:11 2016 From: william.josefson at gmail.com (William Josefsson) Date: Sat, 1 Oct 2016 12:02:11 +0800 Subject: [Openstack-operators] Nova live-migration failing for RHEL7/CentOS7 VMs In-Reply-To: References: <881592D9-8BF6-45BA-9065-2D6D1B2BDB0A@overstock.com> <2A8572BE-A878-4668-93A8-6381CB47A441@ebay.com> Message-ID: Hi Dave, I use CentOS7.2 with the following package as per OpenStack official documentation, 'centos-release-openstack-liberty'. It works just fine, no major issues so far. I haven't tried snapshots so thanks for highlighting the support may not be there. Correct me if I'm mistaken, if I want to use the RHEV stack of KVM Qemu and other related packages, I should install: 'centos-release-qemu-ev' after installing 'centos-release-openstack-liberty'? thx will On Sat, Oct 1, 2016 at 3:04 AM, David Moreau Simard wrote: > If you are deploying on CentOS (with RDO?), you can enable the CentOS > Virtualization special interest group [1] repository. > > The repository contains qemu-kvm-ev >= 2.3 backported from RHEV. > It is recommended as the qemu-kvm version from base CentOS > repositories is not high enough and lacks some features (things like > snapshots, iirc). > > qemu-kvm >= 2.3 is actually a requirement in RDO >= Newton and we'll > bundle the CentOS virtualization SIG repository in our release > packages. > > [1]: https://wiki.centos.org/SpecialInterestGroup/Virtualization > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Thu, Sep 29, 2016 at 5:00 AM, William Josefsson > wrote: >> thanks everyone, I verified setting mem_stats_period_seconds = 0 as >> suggested by Corbin in nova.conf libvirt section, and then restarting >> openstack-nova-compute service and it works! >> >> While this seems to be a workable workaround I'm not sure what's the plans >> to permanently fix this in CentOS7.2? thx will >> >> >> >> On Wed, Sep 28, 2016 at 11:37 PM, Corbin Hendrickson >> wrote: >>> >>> Oh you can read it in the bug thread, but I forgot to mention, if you put >>> in your nova.conf under the libvirt section mem_stats_period_seconds = 0, >>> and restart nova on the destination (although i'd say just do it on both) it >>> will no longer hit the bug. I tested this a couple weeks back with success. >>> >>> Corbin Hendrickson >>> Endurance Cloud Development Lead - Manager >>> Cell: 801-400-0464 >>> >>> On Wed, Sep 28, 2016 at 9:34 AM, Corbin Hendrickson >>> wrote: >>>> >>>> It unfortunately is affecting virtually all of Redhat's latest qemu-kvm >>>> packages. The bug that was unintentionally introduced was done so in >>>> response to CVE-2016-5403 Qemu: virtio: unbounded memory allocation on host >>>> via guest leading to DoS. >>>> >>>> Late in the bug thread, they finally posted to a new bug created for the >>>> breaking of live migrate via Bug 1371943 - RHSA-2016-1756 breaks migration >>>> of instances. >>>> >>>> Based off their posts i've been following it's likely going to "hit the >>>> shelves" when RHEL 7.3 / CentOS 7.3 comes out. It does look like they are >>>> backporting it to all their versions of RHEL so that's good. >>>> >>>> But yes this does affect 2.3 as well. 
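For readers skimming the thread, the workaround Corbin and William verified boils down to one line on each compute node (a sketch; the option comes from nova's libvirt driver, where the default polling period is 10 seconds):

    [libvirt]
    # Stop nova polling the virtio memballoon for memory stats;
    # the affected qemu-kvm builds trip on that query around live migration.
    mem_stats_period_seconds = 0

followed by a service restart, e.g. systemctl restart openstack-nova-compute on CentOS 7.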
>>>> >>>> Corbin Hendrickson >>>> Endurance Cloud Development Lead - Manager >>>> Cell: 801-400-0464 >>>> >>>> On Wed, Sep 28, 2016 at 9:13 AM, Van Leeuwen, Robert >>>> wrote: >>>>> >>>>> > There is a bug in the following: >>>>> >>>>> > >>>>> >>>>> > qemu-kvm-1.5.3-105.el7_2.7 >>>>> >>>>> > qemu-img-1.5.3-105.el7_2.7 >>>>> >>>>> > qemu-kvm-common-1.5.3-105.el7_2.7 >>>>> >>>>> >>>>> >>>>> You might be better of using the RHEV qemu packages >>>>> >>>>> They are more recent (2.3) and have more features compiled into them. >>>>> >>>>> >>>>> >>>>> Cheers, >>>>> >>>>> Robert van Leeuwen >>>>> >>>>> >>>>> _______________________________________________ >>>>> OpenStack-operators mailing list >>>>> OpenStack-operators at lists.openstack.org >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>>>> >>>> >>> >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> From serverascode at gmail.com Sat Oct 1 17:47:56 2016 From: serverascode at gmail.com (Curtis) Date: Sat, 1 Oct 2016 11:47:56 -0600 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: <20160930141526.GV5776@csail.mit.edu> References: <20160930141526.GV5776@csail.mit.edu> Message-ID: On Fri, Sep 30, 2016 at 8:15 AM, Jonathan Proulx wrote: > > Starting to think refactoring my SDN world (currently just neutron > ml2/ovs inside OpenStack) in preparation for maybe finally lighting up > that second Region I've been threatening for the past year... > > Networking is always the hardest design challeng. Has anyone seen my > unicorn? I dream of something the first works with neutron of course > but also can extend the same network features to hardware out side > openstack and into random public cloud infrastructures through VM and/or > containerised gateways. Also I don't want to hire a whole networking > team to run it. > > I'm fairly certain this is still fantasy though I've heard various > vendors promise the earth and stars but I'd love to hear if anyone is > actually getting close to this in production systems and if so what > your experience has been like. > Do you want to have tenants be able to connect their openstack networks to another public clouds network using some kind of API? If so, what are your tenant networks? vlans? vxlan? Thanks, Curtis. > -Jon > > -- > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Blog: serverascode.com From clint at fewbar.com Sat Oct 1 21:39:38 2016 From: clint at fewbar.com (Clint Byrum) Date: Sat, 01 Oct 2016 14:39:38 -0700 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: <20160930141526.GV5776@csail.mit.edu> References: <20160930141526.GV5776@csail.mit.edu> Message-ID: <1475357421-sup-138@fewbar.com> Excerpts from Jonathan Proulx's message of 2016-09-30 10:15:26 -0400: > > Starting to think refactoring my SDN world (currently just neutron > ml2/ovs inside OpenStack) in preparation for maybe finally lighting up > that second Region I've been threatening for the past year... > > Networking is always the hardest design challeng. Has anyone seen my > unicorn? 
I dream of something the first works with neutron of course > but also can extend the same network features to hardware out side > openstack and into random public cloud infrastructures through VM and/or > containerised gateways. Also I don't want to hire a whole networking > team to run it. > > I'm fairly certain this is still fantasy though I've heard various > vendors promise the earth and stars but I'd love to hear if anyone is > actually getting close to this in production systems and if so what > your experience has been like. > Do you want to have tenants be able to connect their openstack networks to another public clouds network using some kind of API? If so, what are your tenant networks? vlans? vxlan? Thanks, Curtis. > -Jon > > -- > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Blog: serverascode.com From clint at fewbar.com Sat Oct 1 21:39:38 2016 From: clint at fewbar.com (Clint Byrum) Date: Sat, 01 Oct 2016 14:39:38 -0700 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: <20160930141526.GV5776@csail.mit.edu> References: <20160930141526.GV5776@csail.mit.edu> Message-ID: <1475357421-sup-138@fewbar.com> Excerpts from Jonathan Proulx's message of 2016-09-30 10:15:26 -0400: > > Starting to think refactoring my SDN world (currently just neutron > ml2/ovs inside OpenStack) in preparation for maybe finally lighting up > that second Region I've been threatening for the past year... > > Networking is always the hardest design challeng. Has anyone seen my > unicorn? I dream of something the first works with neutron of course > but also can extend the same network features to hardware out side > openstack and into random public cloud infrastructures through VM and/or > containerised gateways. Also I don't want to hire a whole networking > team to run it. > > I'm fairly certain this is still fantasy though I've heard various > vendors promise the earth and stars but I'd love to hear if anyone is > actually getting close to this in production systems and if so what > your experience has been like. > > I know it's hard to believe, but this world was foretold long ago and what you want requires no special equipment or changes to OpenStack, just will-power. You can achieve it now if you can use operating system versions published in the last 5 or so years. The steps to do this: 1) Fix your apps to work via IPv6 2) Fix your internal users to have v6 native 3) Attach your VMs and containers to a provider network with v6 subnets 4) Use IPSec and firewalls for critical isolation. (What we use L2 separation for now) This is not complicated, but your SDN vendor probably doesn't want you to know that. You can still attach v4 addresses to your edge endpoints so they can talk to legacy stuff while you migrate. But the idea here is, if you control both ends of a connection, there is no reason you should still be using v4 except tradition. From corbin.hendrickson at endurance.com Sat Oct 1 21:54:33 2016 From: corbin.hendrickson at endurance.com (Corbin Hendrickson) Date: Sat, 1 Oct 2016 15:54:33 -0600 Subject: [Openstack-operators] Nova live-migration failing for RHEL7/CentOS7 VMs In-Reply-To: References: <881592D9-8BF6-45BA-9065-2D6D1B2BDB0A@overstock.com> <2A8572BE-A878-4668-93A8-6381CB47A441@ebay.com> Message-ID: Yes, you can find them here http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/ And if memory serves me well it still does have the live migration issue; live snapshots will not work on this version, and even the latest 2.6 qemu will not work with live snapshots. I don't have the particular thread from the qemu mailing list handy, but it seemed they were still trying to decide how they wanted to do them going forward as of a few months ago. Which is why, if you are using 1.5.* qemu, you will need to set in your nova.conf under the section [workarounds]: disable_libvirt_livesnapshot = False. At that point, if you invoke a nova image-create (aka snapshot) while the instance is running, it will perform a snapshot. Keep in mind, however, that the docs do state that live snapshots fail intermittently under load. So it is a much better option to shut down the guest and perform the nova image-create, as opposed to doing it live, if you can get away with it. Corbin Hendrickson Endurance Cloud Development Lead - Manager Cell: 801-400-0464 On Fri, Sep 30, 2016 at 10:02 PM, William Josefsson < william.josefson at gmail.com> wrote: > Hi Dave, I use CentOS7.2 with the following package as per OpenStack > official documentation, 'centos-release-openstack-liberty'. It works > just fine, no major issues so far. I haven't tried snapshots so thanks > for highlighting the support may not be there. > > Correct me if I'm mistaken, if I want to use the RHEV stack of KVM > Qemu and other related packages, I should install: > 'centos-release-qemu-ev' after installing > 'centos-release-openstack-liberty'?
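A minimal sketch of the nova.conf change Corbin describes above (the option name is real; its Liberty default is True, which keeps live snapshots disabled and, if memory of the driver serves, makes image-create of a running guest fall back to a cold snapshot via managed save, so expect a pause):

    [workarounds]
    # False re-enables libvirt live snapshots; nova's own docs warn
    # these can fail intermittently under load on older qemu.
    disable_libvirt_livesnapshot = False

With the flag set, nova image-create --poll <server> <snapshot-name> snapshots the running guest live; shutting the guest down first, as Corbin suggests, sidesteps the reliability caveat entirely.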
thx will > > On Sat, Oct 1, 2016 at 3:04 AM, David Moreau Simard > wrote: > > If you are deploying on CentOS (with RDO?), you can enable the CentOS > > Virtualization special interest group [1] repository. > > > > The repository contains qemu-kvm-ev >= 2.3 backported from RHEV. > > It is recommended as the qemu-kvm version from base CentOS > > repositories is not high enough and lacks some features (things like > > snapshots, iirc). > > > > qemu-kvm >= 2.3 is actually a requirement in RDO >= Newton and we'll > > bundle the CentOS virtualization SIG repository in our release > > packages. > > > > [1]: https://wiki.centos.org/SpecialInterestGroup/Virtualization > > > > David Moreau Simard > > Senior Software Engineer | Openstack RDO > > > > dmsimard = [irc, github, twitter] > > > > > > On Thu, Sep 29, 2016 at 5:00 AM, William Josefsson > > wrote: > >> thanks everyone, I verified setting mem_stats_period_seconds = 0 as > >> suggested by Corbin in nova.conf libvirt section, and then restarting > >> openstack-nova-compute service and it works! > >> > >> While this seems to be a workable workaround I'm not sure what's the > plans > >> to permanently fix this in CentOS7.2? thx will > >> > >> > >> > >> On Wed, Sep 28, 2016 at 11:37 PM, Corbin Hendrickson > >> wrote: > >>> > >>> Oh you can read it in the bug thread, but I forgot to mention, if you > put > >>> in your nova.conf under the libvirt section mem_stats_period_seconds = > 0, > >>> and restart nova on the destination (although i'd say just do it on > both) it > >>> will no longer hit the bug. I tested this a couple weeks back with > success. > >>> > >>> Corbin Hendrickson > >>> Endurance Cloud Development Lead - Manager > >>> Cell: 801-400-0464 > >>> > >>> On Wed, Sep 28, 2016 at 9:34 AM, Corbin Hendrickson > >>> wrote: > >>>> > >>>> It unfortunately is affecting virtually all of Redhat's latest > qemu-kvm > >>>> packages. The bug that was unintentionally introduced was done so in > >>>> response to CVE-2016-5403 Qemu: virtio: unbounded memory allocation > on host > >>>> via guest leading to DoS. > >>>> > >>>> Late in the bug thread, they finally posted to a new bug created for > the > >>>> breaking of live migrate via Bug 1371943 - RHSA-2016-1756 breaks > migration > >>>> of instances. > >>>> > >>>> Based off their posts i've been following it's likely going to "hit > the > >>>> shelves" when RHEL 7.3 / CentOS 7.3 comes out. It does look like they > are > >>>> backporting it to all their versions of RHEL so that's good. > >>>> > >>>> But yes this does affect 2.3 as well. > >>>> > >>>> Corbin Hendrickson > >>>> Endurance Cloud Development Lead - Manager > >>>> Cell: 801-400-0464 > >>>> > >>>> On Wed, Sep 28, 2016 at 9:13 AM, Van Leeuwen, Robert > >>>> wrote: > >>>>> > >>>>> > There is a bug in the following: > >>>>> > >>>>> > > >>>>> > >>>>> > qemu-kvm-1.5.3-105.el7_2.7 > >>>>> > >>>>> > qemu-img-1.5.3-105.el7_2.7 > >>>>> > >>>>> > qemu-kvm-common-1.5.3-105.el7_2.7 > >>>>> > >>>>> > >>>>> > >>>>> You might be better of using the RHEV qemu packages > >>>>> > >>>>> They are more recent (2.3) and have more features compiled into them. 
> >>>>> > >>>>> > >>>>> > >>>>> Cheers, > >>>>> > >>>>> Robert van Leeuwen > >>>>> > >>>>> > >>>>> _______________________________________________ > >>>>> OpenStack-operators mailing list > >>>>> OpenStack-operators at lists.openstack.org > >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-operators > >>>>> > >>>> > >>> > >> > >> > >> _______________________________________________ > >> OpenStack-operators mailing list > >> OpenStack-operators at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.josefson at gmail.com Sun Oct 2 12:21:56 2016 From: william.josefson at gmail.com (William Josefsson) Date: Sun, 2 Oct 2016 20:21:56 +0800 Subject: [Openstack-operators] Nova live-migration failing for RHEL7/CentOS7 VMs In-Reply-To: References: <881592D9-8BF6-45BA-9065-2D6D1B2BDB0A@overstock.com> <2A8572BE-A878-4668-93A8-6381CB47A441@ebay.com> Message-ID: Many thanks Corbin! I should go ahead test and install 'centos-release-openstack-liberty' followed by 'centos-release-qemu-ev' in my dev environment. To get snapshot support working with qemu-kvm 10:1.5.3-105.el7_2.7 (offline recommended), I will also on my Compute nodes set in nova.conf: [workarounds] disable_libvirt_livesnapshot = False On the other hand, do you folks generally use the RHEV-QEMU stack in your production deployments, or do you see any major benefits doing so instead of using the default qemu that comes with CentOS7.2? thx will On Sun, Oct 2, 2016 at 5:54 AM, Corbin Hendrickson wrote: > Yes, you can find them here > http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/ And if memory > serves me well it still does have the live migration issue, and yes live > snapshots will not work on this version or even the latest 2.6 qemu will not > work with live snapshots. I don't have the particular thread from the qemu > mailing list handy but it seemed they were still trying to decide how they > wanted to do them going forward as of a few months ago. Which is why if you > are using 1.5.* qemu you will need to set in your nova.conf under the > section [workarounds]: disable_libvirt_livesnapshot = False At which > point if you invoke a nova image-create (aka snapshot) and the instance is > running it will perform a snapshot. Keep in mind however, the docs do state > that live snapshots fail intermittently under load. So it is entirely a > better option to shutdown the guest and perform the nova image-create as > opposed to doing it live if you can get away with it. > > Corbin Hendrickson > Endurance Cloud Development Lead - Manager > Cell: 801-400-0464 > > On Fri, Sep 30, 2016 at 10:02 PM, William Josefsson > wrote: >> >> Hi Dave, I use CentOS7.2 with the following package as per OpenStack >> official documentation, 'centos-release-openstack-liberty'. It works >> just fine, no major issues so far. I haven't tried snapshots so thanks >> for highlighting the support may not be there. >> >> Correct me if I'm mistaken, if I want to use the RHEV stack of KVM >> Qemu and other related packages, I should install: >> 'centos-release-qemu-ev' after installing >> 'centos-release-openstack-liberty'? thx will >> >> On Sat, Oct 1, 2016 at 3:04 AM, David Moreau Simard >> wrote: >> > If you are deploying on CentOS (with RDO?), you can enable the CentOS >> > Virtualization special interest group [1] repository. >> > >> > The repository contains qemu-kvm-ev >= 2.3 backported from RHEV. 
>> > It is recommended as the qemu-kvm version from base CentOS >> > repositories is not high enough and lacks some features (things like >> > snapshots, iirc). >> > >> > qemu-kvm >= 2.3 is actually a requirement in RDO >= Newton and we'll >> > bundle the CentOS virtualization SIG repository in our release >> > packages. >> > >> > [1]: https://wiki.centos.org/SpecialInterestGroup/Virtualization >> > >> > David Moreau Simard >> > Senior Software Engineer | Openstack RDO >> > >> > dmsimard = [irc, github, twitter] >> > >> > >> > On Thu, Sep 29, 2016 at 5:00 AM, William Josefsson >> > wrote: >> >> thanks everyone, I verified setting mem_stats_period_seconds = 0 as >> >> suggested by Corbin in nova.conf libvirt section, and then restarting >> >> openstack-nova-compute service and it works! >> >> >> >> While this seems to be a workable workaround I'm not sure what's the >> >> plans >> >> to permanently fix this in CentOS7.2? thx will >> >> >> >> >> >> >> >> On Wed, Sep 28, 2016 at 11:37 PM, Corbin Hendrickson >> >> wrote: >> >>> >> >>> Oh you can read it in the bug thread, but I forgot to mention, if you >> >>> put >> >>> in your nova.conf under the libvirt section mem_stats_period_seconds = >> >>> 0, >> >>> and restart nova on the destination (although i'd say just do it on >> >>> both) it >> >>> will no longer hit the bug. I tested this a couple weeks back with >> >>> success. >> >>> >> >>> Corbin Hendrickson >> >>> Endurance Cloud Development Lead - Manager >> >>> Cell: 801-400-0464 >> >>> >> >>> On Wed, Sep 28, 2016 at 9:34 AM, Corbin Hendrickson >> >>> wrote: >> >>>> >> >>>> It unfortunately is affecting virtually all of Redhat's latest >> >>>> qemu-kvm >> >>>> packages. The bug that was unintentionally introduced was done so in >> >>>> response to CVE-2016-5403 Qemu: virtio: unbounded memory allocation >> >>>> on host >> >>>> via guest leading to DoS. >> >>>> >> >>>> Late in the bug thread, they finally posted to a new bug created for >> >>>> the >> >>>> breaking of live migrate via Bug 1371943 - RHSA-2016-1756 breaks >> >>>> migration >> >>>> of instances. >> >>>> >> >>>> Based off their posts i've been following it's likely going to "hit >> >>>> the >> >>>> shelves" when RHEL 7.3 / CentOS 7.3 comes out. It does look like they >> >>>> are >> >>>> backporting it to all their versions of RHEL so that's good. >> >>>> >> >>>> But yes this does affect 2.3 as well. >> >>>> >> >>>> Corbin Hendrickson >> >>>> Endurance Cloud Development Lead - Manager >> >>>> Cell: 801-400-0464 >> >>>> >> >>>> On Wed, Sep 28, 2016 at 9:13 AM, Van Leeuwen, Robert >> >>>> wrote: >> >>>>> >> >>>>> > There is a bug in the following: >> >>>>> >> >>>>> > >> >>>>> >> >>>>> > qemu-kvm-1.5.3-105.el7_2.7 >> >>>>> >> >>>>> > qemu-img-1.5.3-105.el7_2.7 >> >>>>> >> >>>>> > qemu-kvm-common-1.5.3-105.el7_2.7 >> >>>>> >> >>>>> >> >>>>> >> >>>>> You might be better of using the RHEV qemu packages >> >>>>> >> >>>>> They are more recent (2.3) and have more features compiled into >> >>>>> them. 
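In package terms, and assuming a stock CentOS 7.2 host with the extras repository enabled (where both release packages live), the sequence under discussion is roughly:

    yum install centos-release-openstack-liberty centos-release-qemu-ev
    yum install qemu-kvm-ev
    # qemu-kvm-ev replaces the base qemu-kvm; running guests keep the
    # old binary until they are stopped and started (or migrated).

The relative order of the two release packages should not matter; each just drops a .repo file under /etc/yum.repos.d/.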
>> >>>>> >> >>>>> >> >>>>> >> >>>>> Cheers, >> >>>>> >> >>>>> Robert van Leeuwen >> >>>>> >> >>>>> >> >>>>> _______________________________________________ >> >>>>> OpenStack-operators mailing list >> >>>>> OpenStack-operators at lists.openstack.org >> >>>>> >> >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >>>>> >> >>>> >> >>> >> >> >> >> >> >> _______________________________________________ >> >> OpenStack-operators mailing list >> >> OpenStack-operators at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > > From serverascode at gmail.com Sun Oct 2 22:22:52 2016 From: serverascode at gmail.com (Curtis) Date: Sun, 2 Oct 2016 16:22:52 -0600 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: <1475357421-sup-138@fewbar.com> References: <20160930141526.GV5776@csail.mit.edu> <1475357421-sup-138@fewbar.com> Message-ID: On Sat, Oct 1, 2016 at 3:39 PM, Clint Byrum wrote: > Excerpts from Jonathan Proulx's message of 2016-09-30 10:15:26 -0400: >> >> Starting to think refactoring my SDN world (currently just neutron >> ml2/ovs inside OpenStack) in preparation for maybe finally lighting up >> that second Region I've been threatening for the past year... >> >> Networking is always the hardest design challeng. Has anyone seen my >> unicorn? I dream of something the first works with neutron of course >> but also can extend the same network features to hardware out side >> openstack and into random public cloud infrastructures through VM and/or >> containerised gateways. Also I don't want to hire a whole networking >> team to run it. >> >> I'm fairly certain this is still fantasy though I've heard various >> vendors promise the earth and stars but I'd love to hear if anyone is >> actually getting close to this in production systems and if so what >> your experience has been like. >> > > I know it's hard to believe, but this world was foretold long ago and > what you want requires no special equipment or changes to OpenStack, > just will-power. You can achieve it now if you can use operating system > versions published in the last 5 or so years. > > The steps to do this: > > 1) Fix your apps to work via IPv6 > 2) Fix your internal users to have v6 native > 3) Attach your VMs and containers to a provider network with v6 subnets > 4) Use IPSec and firewalls for critical isolation. (What we use L2 > separation for now) > > This is not complicated, but your SDN vendor probably doesn't want you > to know that. You can still attach v4 addresses to your edge endpoints > so they can talk to legacy stuff while you migrate. But the idea here > is, if you control both ends of a connection, there is no reason you > should still be using v4 except tradition. It would be great for everyone to use ipv6. However, I'm not sure what major public clouds support it. For example I'm pretty sure AWS does not (maybe for some services). I'd love to be wrong on that. :) > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Blog: serverascode.com From clint at fewbar.com Mon Oct 3 00:06:02 2016 From: clint at fewbar.com (Clint Byrum) Date: Sun, 02 Oct 2016 17:06:02 -0700 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? 
In-Reply-To: References: <20160930141526.GV5776@csail.mit.edu> <1475357421-sup-138@fewbar.com> Message-ID: <1475453112-sup-3491@fewbar.com> Excerpts from Curtis's message of 2016-10-02 16:22:52 -0600: > On Sat, Oct 1, 2016 at 3:39 PM, Clint Byrum wrote: > > Excerpts from Jonathan Proulx's message of 2016-09-30 10:15:26 -0400: > >> > >> Starting to think refactoring my SDN world (currently just neutron > >> ml2/ovs inside OpenStack) in preparation for maybe finally lighting up > >> that second Region I've been threatening for the past year... > >> > >> Networking is always the hardest design challeng. Has anyone seen my > >> unicorn? I dream of something the first works with neutron of course > >> but also can extend the same network features to hardware out side > >> openstack and into random public cloud infrastructures through VM and/or > >> containerised gateways. Also I don't want to hire a whole networking > >> team to run it. > >> > >> I'm fairly certain this is still fantasy though I've heard various > >> vendors promise the earth and stars but I'd love to hear if anyone is > >> actually getting close to this in production systems and if so what > >> your experience has been like. > >> > > > > I know it's hard to believe, but this world was foretold long ago and > > what you want requires no special equipment or changes to OpenStack, > > just will-power. You can achieve it now if you can use operating system > > versions published in the last 5 or so years. > > > > The steps to do this: > > > > 1) Fix your apps to work via IPv6 > > 2) Fix your internal users to have v6 native > > 3) Attach your VMs and containers to a provider network with v6 subnets > > 4) Use IPSec and firewalls for critical isolation. (What we use L2 > > separation for now) > > > > This is not complicated, but your SDN vendor probably doesn't want you > > to know that. You can still attach v4 addresses to your edge endpoints > > so they can talk to legacy stuff while you migrate. But the idea here > > is, if you control both ends of a connection, there is no reason you > > should still be using v4 except tradition. > > It would be great for everyone to use ipv6. However, I'm not sure what > major public clouds support it. For example I'm pretty sure AWS does > not (maybe for some services). I'd love to be wrong on that. :) > IPv6 is already rolling out on Amazon [1] (ELB also has had IPv6 for quite some time), though right now that only helps you for egress traffic from your own cloud (EC2 won't give your instances a native IPv6 address). You can still use a tunnel provider to use ipv6 on AWS, just like any other hosting provider. However, another idea is, take your business elsewhere, to a provider that _will_ give you IPv6, and will also run a cloud that is aligned with your interests as an OpenStack user [2]. [1] https://aws.amazon.com/blogs/aws/now-available-ipv6-support-for-amazon-s3/ [2] https://www.openstack.org/marketplace/public-clouds/ From mkassawara at gmail.com Mon Oct 3 04:15:20 2016 From: mkassawara at gmail.com (Matt Kassawara) Date: Sun, 2 Oct 2016 22:15:20 -0600 Subject: [Openstack-operators] Reserve an external network for 1 tenant In-Reply-To: References: Message-ID: How are you creating the provider (external) network? On Thu, Sep 29, 2016 at 6:01 AM, Saverio Proto wrote: > Hello, > > Context: > - openstack liberty > - ubuntu trusty > - neutron networking with vxlan tunnels > > we have been running Openstack with a single external network so far. 
> > Now we have a specific VLAN in our datacenter with some hardware boxes > that need a connection to a specific tenant network. > > To make this possible I changed the configuration of the network node > to support multiple external networks. I am able to create a router > and set as external network the new physnet where the boxes are. > > Everything looks nice except that all the projects can benefit from > this new external network. In any tenant I can create a router, and > set the external network and connect to the boxes. I cannot restrict > it to a specific tenant. > > I found this piece of documentation: > > https://wiki.openstack.org/wiki/Neutron/sharing-model- > for-external-networks > > So it looks like it is impossible to have a flat external network > reserved for 1 specific tenant. > > I also tried to follow this documentation: > http://docs.openstack.org/liberty/networking-guide/adv- > config-network-rbac.html > > But it does not specify if it is possible to specify a policy for an > external network to limit the sharing. > > It did not work for me so I guess this does not work when the secret > network I want to create is external. > > There is an action --action access_as_external that is not clear to me. > > Also look like this feature is evolving in Newton: > http://docs.openstack.org/draft/networking-guide/config-rbac.html > > Anyone has tried similar setups ? What is the minimum openstack > version to get this done ? > > thank you > > Saverio > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at benton.pub Mon Oct 3 05:00:58 2016 From: kevin at benton.pub (Kevin Benton) Date: Sun, 2 Oct 2016 22:00:58 -0700 Subject: [Openstack-operators] Reserve an external network for 1 tenant In-Reply-To: References: Message-ID: You will need mitaka to get an external network that is only available to specific tenants. That is what the 'access_as_external' you identified does. Search for the section "Allowing a network to be used as an external network" in http://docs.openstack.org/mitaka/networking-guide/config-rbac.html. On Thu, Sep 29, 2016 at 5:01 AM, Saverio Proto wrote: > Hello, > > Context: > - openstack liberty > - ubuntu trusty > - neutron networking with vxlan tunnels > > we have been running Openstack with a single external network so far. > > Now we have a specific VLAN in our datacenter with some hardware boxes > that need a connection to a specific tenant network. > > To make this possible I changed the configuration of the network node > to support multiple external networks. I am able to create a router > and set as external network the new physnet where the boxes are. > > Everything looks nice except that all the projects can benefit from > this new external network. In any tenant I can create a router, and > set the external network and connect to the boxes. I cannot restrict > it to a specific tenant. > > I found this piece of documentation: > > https://wiki.openstack.org/wiki/Neutron/sharing-model- > for-external-networks > > So it looks like it is impossible to have a flat external network > reserved for 1 specific tenant. 
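For concreteness, the Mitaka-era flow from the guide Kevin cites looks roughly like this (IDs are placeholders): as admin, create the network without marking it shared or external, then grant external access to exactly one project:

    neutron rbac-create --type network --action access_as_external \
        --target-tenant <project-id> <network-id>

The targeted RBAC entry makes the network usable as an external network by that one project while keeping it invisible to everyone else.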
> I also tried to follow this documentation: > http://docs.openstack.org/liberty/networking-guide/adv-config-network-rbac.html > > But it does not specify if it is possible to specify a policy for an > external network to limit the sharing. > > It did not work for me so I guess this does not work when the secret > network I want to create is external. > > There is an action --action access_as_external that is not clear to me. > > Also look like this feature is evolving in Newton: > http://docs.openstack.org/draft/networking-guide/config-rbac.html > > Anyone has tried similar setups ? What is the minimum openstack > version to get this done ? > > thank you > > Saverio > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zioproto at gmail.com Mon Oct 3 07:11:28 2016 From: zioproto at gmail.com (Saverio Proto) Date: Mon, 3 Oct 2016 09:11:28 +0200 Subject: [Openstack-operators] Reserve an external network for 1 tenant In-Reply-To: References: Message-ID: Hello Matt, first of all, in the file plugins/ml2/openvswitch_agent.ini you need to have bridge mappings; in my case, for example: bridge_mappings = physnet1:br-eth3,physnet2:br-eth4 This will define what physnet1 means in the OpenStack context. To create the external network I do: openstack network create --no-share --project uuid --provider-physical-network physnet2 --provider-network-type flat --external NETWORKNAME Of course the --no-share is useless because, the network being external, it will be shared by default. Saverio 2016-10-03 6:15 GMT+02:00 Matt Kassawara : > How are you creating the provider (external) network? > > On Thu, Sep 29, 2016 at 6:01 AM, Saverio Proto wrote: >> >> Hello, >> >> Context: >> - openstack liberty >> - ubuntu trusty >> - neutron networking with vxlan tunnels >> >> we have been running Openstack with a single external network so far. >> >> Now we have a specific VLAN in our datacenter with some hardware boxes >> that need a connection to a specific tenant network. >> >> To make this possible I changed the configuration of the network node >> to support multiple external networks. I am able to create a router >> and set as external network the new physnet where the boxes are. >> >> Everything looks nice except that all the projects can benefit from >> this new external network. In any tenant I can create a router, and >> set the external network and connect to the boxes. I cannot restrict >> it to a specific tenant. >> >> I found this piece of documentation: >> >> >> https://wiki.openstack.org/wiki/Neutron/sharing-model-for-external-networks >> >> So it looks like it is impossible to have a flat external network >> reserved for 1 specific tenant. >> >> I also tried to follow this documentation: >> >> http://docs.openstack.org/liberty/networking-guide/adv-config-network-rbac.html >> >> But it does not specify if it is possible to specify a policy for an >> external network to limit the sharing. >> >> It did not work for me so I guess this does not work when the secret >> network I want to create is external. >> >> There is an action --action access_as_external that is not clear to me. >> >> Also look like this feature is evolving in Newton: >> http://docs.openstack.org/draft/networking-guide/config-rbac.html >> >> Anyone has tried similar setups ?
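One detail worth spelling out, since the mail above names the file but not the section: bridge_mappings lives under [ovs] in the agent config, e.g. (a sketch; bridge names are site-specific):

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini
    [ovs]
    bridge_mappings = physnet1:br-eth3,physnet2:br-eth4

and the server side has to allow the flat network too, typically flat_networks = physnet1,physnet2 under [ml2_type_flat] in ml2_conf.ini.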
What is the minimum openstack >> version to get this done ? >> >> thank you >> >> Saverio >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > From zioproto at gmail.com Mon Oct 3 07:16:33 2016 From: zioproto at gmail.com (Saverio Proto) Date: Mon, 3 Oct 2016 09:16:33 +0200 Subject: [Openstack-operators] Reserve an external network for 1 tenant In-Reply-To: References: Message-ID: Sorry I missed the Mailing List in the Cc: Saverio 2016-10-03 9:15 GMT+02:00 Saverio Proto : > Hello Kevin, > > thanks for your answer. > > so far I managed to make the network not shared just by making it not > external. Because I don't need NAT and floating IPs, this will match my > use case. > > As an admin I create the network like: > openstack network create --no-share --project user_project_uuid > --provider-physical-network physnet2 --provider-network-type flat > NETWORKNAME > > In this way only the users that belong to user_project_uuid see the > network with 'list' and 'show' operations. > > I still have to test carefully if Openstack will allow isolation to > break in case a user or admin tries to create more networks mapped to > physnet2. > > I hope I will upgrade to Mitaka as soon as possible. > > thank you > > Saverio > > > > > > 2016-10-03 7:00 GMT+02:00 Kevin Benton : >> You will need mitaka to get an external network that is only available to >> specific tenants. That is what the 'access_as_external' you identified does. >> >> Search for the section "Allowing a network to be used as an external >> network" in >> http://docs.openstack.org/mitaka/networking-guide/config-rbac.html. >> >> On Thu, Sep 29, 2016 at 5:01 AM, Saverio Proto wrote: >>> Hello, >>> >>> Context: >>> - openstack liberty >>> - ubuntu trusty >>> - neutron networking with vxlan tunnels >>> >>> we have been running Openstack with a single external network so far. >>> >>> Now we have a specific VLAN in our datacenter with some hardware boxes >>> that need a connection to a specific tenant network. >>> >>> To make this possible I changed the configuration of the network node >>> to support multiple external networks. I am able to create a router >>> and set as external network the new physnet where the boxes are. >>> >>> Everything looks nice except that all the projects can benefit from >>> this new external network. In any tenant I can create a router, and >>> set the external network and connect to the boxes. I cannot restrict >>> it to a specific tenant. >>> >>> I found this piece of documentation: >>> >>> >>> https://wiki.openstack.org/wiki/Neutron/sharing-model-for-external-networks >>> >>> So it looks like it is impossible to have a flat external network >>> reserved for 1 specific tenant. >>> >>> I also tried to follow this documentation: >>> >>> http://docs.openstack.org/liberty/networking-guide/adv-config-network-rbac.html >>> >>> But it does not specify if it is possible to specify a policy for an >>> external network to limit the sharing. >>> >>> It did not work for me so I guess this does not work when the secret >>> network I want to create is external. >>> >>> There is an action --action access_as_external that is not clear to me. >>> >>> Also look like this feature is evolving in Newton: >>> http://docs.openstack.org/draft/networking-guide/config-rbac.html >>> >>> Anyone has tried similar setups ? What is the minimum openstack >>> version to get this done ?
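On the isolation worry above (a user or admin creating more networks mapped to physnet2): for ordinary users this is already closed off by neutron's default policy.json, which keeps the provider attributes admin-only:

    "create_network:provider:physical_network": "rule:admin_only",
    "create_network:provider:network_type": "rule:admin_only",
    "create_network:provider:segmentation_id": "rule:admin_only",

so only another admin can map a new network onto physnet2.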
>>> >>> thank you >>> >>> Saverio >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> From mkassawara at gmail.com Mon Oct 3 12:31:47 2016 From: mkassawara at gmail.com (Matt Kassawara) Date: Mon, 3 Oct 2016 06:31:47 -0600 Subject: [Openstack-operators] Reserve an external network for 1 tenant In-Reply-To: References: Message-ID: Alternatively, you could drop the 'external' attribute and attach your instances directly to the provider network (no routers or private networks). On Mon, Oct 3, 2016 at 1:16 AM, Saverio Proto wrote: > Sorry I missed the Mailing List in the Cc: > Saverio > > 2016-10-03 9:15 GMT+02:00 Saverio Proto : > > Hello Kevin, > > > > thanks for your answer. > > > > so far I managed to make the network not shared just by making it not > > external. Because I dont need NAT and floatingips this will match my > > use case. > > > > As an admin I create the network like: > > openstack network create --no-share --project user_project_uuid > > --provider-physical-network physnet2 --provider-network-type flat > > NETWORKNAME > > > > In this way only the users that belong to user_project_uuid see the > > network with 'list' and 'show' operations. > > > > I still have to test carefully if Openstack will allow isolation to > > brake in case a user or admin tries to create more networks mapped to > > physnet2 > > > > I hope I will upgrade to Mitaka as soon as possible. > > > > thank you > > > > Saverio > > > > > > > > > > > > 2016-10-03 7:00 GMT+02:00 Kevin Benton : > >> You will need mitaka to get an external network that is only available > to > >> specific tenants. That is what the 'access_as_external' you identified > does. > >> > >> Search for the section "Allowing a network to be used as an external > >> network" in > >> http://docs.openstack.org/mitaka/networking-guide/config-rbac.html. > >> > >> On Thu, Sep 29, 2016 at 5:01 AM, Saverio Proto > wrote: > >>> > >>> Hello, > >>> > >>> Context: > >>> - openstack liberty > >>> - ubuntu trusty > >>> - neutron networking with vxlan tunnels > >>> > >>> we have been running Openstack with a single external network so far. > >>> > >>> Now we have a specific VLAN in our datacenter with some hardware boxes > >>> that need a connection to a specific tenant network. > >>> > >>> To make this possible I changed the configuration of the network node > >>> to support multiple external networks. I am able to create a router > >>> and set as external network the new physnet where the boxes are. > >>> > >>> Everything looks nice except that all the projects can benefit from > >>> this new external network. In any tenant I can create a router, and > >>> set the external network and connect to the boxes. I cannot restrict > >>> it to a specific tenant. > >>> > >>> I found this piece of documentation: > >>> > >>> > >>> https://wiki.openstack.org/wiki/Neutron/sharing-model- > for-external-networks > >>> > >>> So it looks like it is impossible to have a flat external network > >>> reserved for 1 specific tenant. > >>> > >>> I also tried to follow this documentation: > >>> > >>> http://docs.openstack.org/liberty/networking-guide/adv- > config-network-rbac.html > >>> > >>> But it does not specify if it is possible to specify a policy for an > >>> external network to limit the sharing. 
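Matt's alternative in command form (a sketch; image, flavor and IDs are placeholders): with the provider network owned by the project, instances attach to it directly and no router or private network is involved:

    nova boot --image <image> --flavor <flavor> \
        --nic net-id=<provider-network-id> <instance-name>

The guest then sits on the datacenter VLAN itself, so the hardware boxes are reachable at layer 2 with no NAT or floating IPs.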
>>> It did not work for me so I guess this does not work when the secret > >>> network I want to create is external. > >>> > >>> There is an action --action access_as_external that is not clear to me. > >>> > >>> Also look like this feature is evolving in Newton: > >>> http://docs.openstack.org/draft/networking-guide/config-rbac.html > >>> > >>> Anyone has tried similar setups ? What is the minimum openstack > >>> version to get this done ? > >>> > >>> thank you > >>> > >>> Saverio > >>> > >>> _______________________________________________ > >>> OpenStack-operators mailing list > >>> OpenStack-operators at lists.openstack.org > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > >> > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mvanwink at rackspace.com Mon Oct 3 13:51:26 2016 From: mvanwink at rackspace.com (Matt Van Winkle) Date: Mon, 3 Oct 2016 13:51:26 +0000 Subject: [Openstack-operators] Ops Meetups Team - Update on Schedule planning Message-ID: <99993748-EEC5-4C1F-8292-015CA03C33FE@rackspace.com> Hi Ops Meetups Team, Friday, a few of us met to discuss the current version of the agenda [1]. We made a few tweaks and have a few items to discuss at the next meeting. Namely: - We removed the scheduled session for lightning talks because room for them is in the Ops war stories track - We added a slot to have all the WG leaders introduce their teams, what they are working on and what they hope to accomplish as a team in BCN. A pending action is open to reach out to them - We swapped the AUC Update and Swift sessions from the first version to give those interested in various storage topics a chance to attend both - We need to confirm that the NFV sessions in this calendar are duplicates now that the WG has had sessions scheduled through the separate WG process. If so, this will free up two slots in the morning - There are 3 or so empty slots that we can still look at, but the team wondered if leaving the ones late in the day open might be good for overflow/follow-up from earlier sessions We can discuss all of this in more detail in the meeting tomorrow. [2] I will try to make notes in the agenda. Thanks! VW [1] https://docs.google.com/spreadsheets/d/1EUSYMs3GfglnD8yfFaAXWhLe0F5y9hCUKqCYe0Vp1oA/edit#gid=803513477 [2] https://wiki.openstack.org/wiki/Ops_Meetups_Team#Meeting_Information From jon at csail.mit.edu Mon Oct 3 15:16:03 2016 From: jon at csail.mit.edu (Jonathan Proulx) Date: Mon, 3 Oct 2016 11:16:03 -0400 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: <1475357421-sup-138@fewbar.com> References: <20160930141526.GV5776@csail.mit.edu> <1475357421-sup-138@fewbar.com> Message-ID: <20161003151603.GC5776@csail.mit.edu> On Sat, Oct 01, 2016 at 02:39:38PM -0700, Clint Byrum wrote: :I know it's hard to believe, but this world was foretold long ago and :what you want requires no special equipment or changes to OpenStack, :just will-power. You can achieve it now if you can use operating system :versions published in the last 5 or so years. : :The steps to do this: : :1) Fix your apps to work via IPv6 :2) Fix your internal users to have v6 native :3) Attach your VMs and containers to a provider network with v6 subnets :4) Use IPSec and firewalls for critical isolation.
(What we use L2 : separation for now) That *is* hard to believe :) IPv6 has been coming soon since I started in tech a very long time ago ... I will consider that but I have a diverse set of users I don't control. I *may* be able to apply pressure along the lines of 'if you really need this, then do the right thing', but I probably still want a v4 solution in my pocket. -Jon From jon at csail.mit.edu Mon Oct 3 15:20:56 2016 From: jon at csail.mit.edu (Jonathan Proulx) Date: Mon, 3 Oct 2016 11:20:56 -0400 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: References: <20160930141526.GV5776@csail.mit.edu> Message-ID: <20161003152056.GD5776@csail.mit.edu> On Sat, Oct 01, 2016 at 11:47:56AM -0600, Curtis wrote: :On Fri, Sep 30, 2016 at 8:15 AM, Jonathan Proulx wrote: :> :> Starting to think refactoring my SDN world (currently just neutron :> ml2/ovs inside OpenStack) in preparation for maybe finally lighting up :> that second Region I've been threatening for the past year... :> :> Networking is always the hardest design challeng. Has anyone seen my :> unicorn? I dream of something the first works with neutron of course :> but also can extend the same network features to hardware out side :> openstack and into random public cloud infrastructures through VM and/or :> containerised gateways. Also I don't want to hire a whole networking :> team to run it. :> :> I'm fairly certain this is still fantasy though I've heard various :> vendors promise the earth and stars but I'd love to hear if anyone is :> actually getting close to this in production systems and if so what :> your experience has been like. :> : :Do you want to have tenants be able to connect their openstack :networks to another public clouds network using some kind of API? If :so, what are your tenant networks? vlans? vxlan? Yes, I do want to have tenants be able to connect their openstack networks to another public clouds network using some kind of API. Since this is under consideration as part of a new region I haven't implemented anything yet (current region is GRE but willing to cut that off as 'legacy' especially as we're trying to wind down the DC it lives in). So at this point all possibilities are on the table. My main question is "is anyone actually doing this" with a follow-up of "if so how?" Thanks, -Jon :Thanks, :Curtis. : :> -Jon :> :> -- :> :> _______________________________________________ :> OpenStack-operators mailing list :> OpenStack-operators at lists.openstack.org :> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators : : : :-- :Blog: serverascode.com -- From clint at fewbar.com Mon Oct 3 15:45:38 2016 From: clint at fewbar.com (Clint Byrum) Date: Mon, 03 Oct 2016 08:45:38 -0700 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: <20161003151603.GC5776@csail.mit.edu> References: <20160930141526.GV5776@csail.mit.edu> <1475357421-sup-138@fewbar.com> <20161003151603.GC5776@csail.mit.edu> Message-ID: <1475509296-sup-8589@fewbar.com> Excerpts from Jonathan Proulx's message of 2016-10-03 11:16:03 -0400: > On Sat, Oct 01, 2016 at 02:39:38PM -0700, Clint Byrum wrote: > > :I know it's hard to believe, but this world was foretold long ago and > :what you want requires no special equipment or changes to OpenStack, > :just will-power. You can achieve it now if you can use operating system > :versions published in the last 5 or so years.
> : > :The steps to do this: > : > :1) Fix your apps to work via IPv6 > :2) Fix your internal users to have v6 native > :3) Attach your VMs and containers to a provider network with v6 subnets > :4) Use IPSec and firewalls for critical isolation. (What we use L2 > : separation for now) > > That *is* hard to belive :) IPv6 has been coming soon since I started > in tech a very long time ago ... > > I will consider that but I have a diverse set of users I don't > control. I *may* be able to apply pressure in the if you really need > this then do the right thing, but I probably still want a v4 solution > in my pocket. > Treat v4 as an internet-only, insecure, extra service that one must ask for. It's extremely easy, with OpenStack, to provide both if people want it, and just let them choose. Those who choose v4 only will find they can't do some things, and have a clear incentive to change. It's not that v6 is coming. It's here, knocking on your door. But, like a vampire, you still have to invite it in. From chris at romana.io Mon Oct 3 15:59:07 2016 From: chris at romana.io (Chris Marino) Date: Mon, 3 Oct 2016 08:59:07 -0700 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: <20161003152056.GD5776@csail.mit.edu> References: <20160930141526.GV5776@csail.mit.edu> <20161003152056.GD5776@csail.mit.edu> Message-ID: This can also be done with IPv4 address as well. Not quite the flexibility that comes with v6, but workable for all but the very largest environments. This is the approach that is embodied in the Romana (http://romana.io/) project (I am part of this effort). If you run all your OpenStack VMs on a 10/8 provider network, you can carve up the address space for projects, subnets, etc. You can NAT to your existing network, or push host routes to the VMs that need access (a feature that has not yet been implemented). CM ? On Mon, Oct 3, 2016 at 8:20 AM, Jonathan Proulx wrote: > On Sat, Oct 01, 2016 at 11:47:56AM -0600, Curtis wrote: > :On Fri, Sep 30, 2016 at 8:15 AM, Jonathan Proulx > wrote: > :> > :> Starting to think refactoring my SDN world (currently just neutron > :> ml2/ovs inside OpenStack) in preparation for maybe finally lighting up > :> that second Region I've been threatening for the past year... > :> > :> Networking is always the hardest design challeng. Has anyone seen my > :> unicorn? I dream of something the first works with neutron of course > :> but also can extend the same network features to hardware out side > :> openstack and into random public cloud infrastructures through VM and/or > :> containerised gateways. Also I don't want to hire a whole networking > :> team to run it. > :> > :> I'm fairly certain this is still fantasy though I've heard various > :> vendors promise the earth and stars but I'd love to hear if anyone is > :> actually getting close to this in production systems and if so what > :> your experience has been like. > :> > : > :Do you want to have tenants be able to connect their openstack > :networks to another public clouds network using some kind of API? If > :so, what are your tenant networks? vlans? vxlan? > > Yes, I do want to have tenants be able to connect their openstack > networks to another public clouds network using some kind of API. > > Since this is under consideration as part of a new region I haven't > implemented anything yet (current region is GRE but willing to cut > that off as 'legacy' epecially as we're trying to wind down the DC it > lives in). 
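Stock Neutron can already do part of the 10/8 carving Chris describes via subnet pools, present since Kilo; a sketch with invented names (the exact CLI flags may differ by release):

    neutron subnetpool-create --shared --pool-prefix 10.0.0.0/8 \
        --default-prefixlen 24 tenant-v4-pool
    neutron subnet-create --subnetpool tenant-v4-pool <network>

Projects then draw non-overlapping /24s from one routable space, which avoids NAT across sites; the host-route push Chris mentions is the piece that still needs something like Romana.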
So at this point all possibilites are on the table. > My main question is "is anyone actually doing this" with a follow up > of "if so how?" > > Thanks, > -Jon > > :Thanks, > :Curtis. > : > :> -Jon > :> > :> -- > :> > :> _______________________________________________ > :> OpenStack-operators mailing list > :> OpenStack-operators at lists.openstack.org > :> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > : > : > : > :-- > :Blog: serverascode.com > > -- > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at nycresistor.com Mon Oct 3 16:05:57 2016 From: matt at nycresistor.com (Silence Dogood) Date: Mon, 3 Oct 2016 12:05:57 -0400 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: References: <20160930141526.GV5776@csail.mit.edu> <20161003152056.GD5776@csail.mit.edu> Message-ID: I think the best general way to view networking in the cloud is WAN vs. cloud LAN. There's almost always an edge routing env for your cloud environments ( whether they be by region, by policy, or by 'Tim is an angry dude and you don't touch his instances' ). Everything beyond that edge is a WAN problem and can be handled in fairly traditional ways... of course that's oversimplifying things like BGP passing into your cloud's LAN env. Or multihoming the edge router for that cloud env to multiple networks ( god help you if there is spanning tree involved ). Honestly, I'd still use that demarcation though. It helps split the issue into manageable chunks. -Matt On Mon, Oct 3, 2016 at 11:59 AM, Chris Marino wrote: > This can also be done with IPv4 address as well. Not quite the flexibility > that comes with v6, but workable for all but the very largest environments. > > This is the approach that is embodied in the Romana (http://romana.io/) project > (I am part of this effort). > > If you run all your OpenStack VMs on a 10/8 provider network, you can > carve up the address space for projects, subnets, etc. You can NAT to your > existing network, or push host routes to the VMs that need access (a > feature that has not yet been implemented). > > CM > ? > > On Mon, Oct 3, 2016 at 8:20 AM, Jonathan Proulx wrote: > >> On Sat, Oct 01, 2016 at 11:47:56AM -0600, Curtis wrote: >> :On Fri, Sep 30, 2016 at 8:15 AM, Jonathan Proulx >> wrote: >> :> >> :> Starting to think refactoring my SDN world (currently just neutron >> :> ml2/ovs inside OpenStack) in preparation for maybe finally lighting up >> :> that second Region I've been threatening for the past year... >> :> >> :> Networking is always the hardest design challeng. Has anyone seen my >> :> unicorn? I dream of something the first works with neutron of course >> :> but also can extend the same network features to hardware out side >> :> openstack and into random public cloud infrastructures through VM >> and/or >> :> containerised gateways. Also I don't want to hire a whole networking >> :> team to run it. >> :> >> :> I'm fairly certain this is still fantasy though I've heard various >> :> vendors promise the earth and stars but I'd love to hear if anyone is >> :> actually getting close to this in production systems and if so what >> :> your experience has been like.
>> :> >> : >> :Do you want to have tenants be able to connect their openstack >> :networks to another public clouds network using some kind of API? If >> :so, what are your tenant networks? vlans? vxlan? >> >> Yes, I do want to have tenants be able to connect their openstack >> networks to another public clouds network using some kind of API. >> >> Since this is under consideration as part of a new region I haven't >> implemented anything yet (current region is GRE but willing to cut >> that off as 'legacy' epecially as we're trying to wind down the DC it >> lives in). So at this point all possibilites are on the table. >> >> My main question is "is anyone actually doing this" with a follow up >> of "if so how?" >> >> Thanks, >> -Jon >> >> :Thanks, >> :Curtis. >> : >> :> -Jon >> :> >> :> -- >> :> >> :> _______________________________________________ >> :> OpenStack-operators mailing list >> :> OpenStack-operators at lists.openstack.org >> :> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> k-operators >> : >> : >> : >> :-- >> :Blog: serverascode.com >> >> -- >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jon at csail.mit.edu Mon Oct 3 17:52:42 2016 From: jon at csail.mit.edu (Jonathan Proulx) Date: Mon, 3 Oct 2016 13:52:42 -0400 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: <20160930141526.GV5776@csail.mit.edu> References: <20160930141526.GV5776@csail.mit.edu> Message-ID: <20161003175242.GE5776@csail.mit.edu> So my sense from responses so far: No one is doing unified SDN solutions across clouds and no one really wants to. Consensus is just treat each network island like another remote DC and use normal VPN type stuff to glue them together. ( nod to http://romana.io an interesting looking network and security automation project as a network agnostic alternative to SDN for managing cross cloud policy on whatever networks are available. ) -Jon From matt at nycresistor.com Mon Oct 3 17:59:51 2016 From: matt at nycresistor.com (Silence Dogood) Date: Mon, 3 Oct 2016 13:59:51 -0400 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: <20161003175242.GE5776@csail.mit.edu> References: <20160930141526.GV5776@csail.mit.edu> <20161003175242.GE5776@csail.mit.edu> Message-ID: food for nightmares... try to consider how you would handle ip address mapping around a fiber ring between multiple cloud infrastructures. On Mon, Oct 3, 2016 at 1:52 PM, Jonathan Proulx wrote: > > So my sense from responses so far: > > No one is doing unified SDN solutions across clouds and no one really > wants to. > > Consensus is just treat each network island like another remote DC and > use normal VPN type stuff to glue them together. > > ( nod to http://romana.io an interesting looking network and security > automation project as a network agnostic alternative to SDN for > managing cross cloud policy on whatever networks are available. 
) > > -Jon > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clint at fewbar.com Mon Oct 3 18:06:14 2016 From: clint at fewbar.com (Clint Byrum) Date: Mon, 03 Oct 2016 11:06:14 -0700 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: <20161003175242.GE5776@csail.mit.edu> References: <20160930141526.GV5776@csail.mit.edu> <20161003175242.GE5776@csail.mit.edu> Message-ID: <1475517939-sup-9599@fewbar.com> Excerpts from Jonathan Proulx's message of 2016-10-03 13:52:42 -0400: > > So my sense from responses so far: > > No one is doing unified SDN solutions across clouds and no one really > wants to. > > Consensus is just treat each network island like another remote DC and > use normal VPN type stuff to glue them together. > > ( nod to http://romana.io an interesting looking network and security > automation project as a network agnostic alternative to SDN for > managing cross cloud policy on whatever networks are available. ) > Oh sorry, there are people taking the complex route to what you want.. sort of: https://wiki.openstack.org/wiki/Tricircle From eyang at technicacorp.com Mon Oct 3 18:45:10 2016 From: eyang at technicacorp.com (Eric Yang) Date: Mon, 3 Oct 2016 18:45:10 +0000 Subject: [Openstack-operators] Kilo to Liberty upgrade Message-ID: Hi all, Can anybody please share your experience with upgrading a Kilo environment to Liberty? More specifically I have a Kilo environment deployed and managed under Fuel 7.0, and I am looking for a path to upgrade it to Liberty. I would certainly like to: - Keep it under management of the Fuel master; - Preserve the previous configuration as much as possible; - Find a way to migrate the instances with minimum to no down-time. I am open to giving up on some of the above criteria if I can upgrade with the least disruption to the instance workloads. I am also using Ceph underneath Cinder/Swift/Glance, and runs Juniper OpenContrail as a Neutron plugin. Thanks in advance for any idea/suggestions, Eric Yang Senior Solutions Architect [cid:image003.jpg at 01D1AB68.B3283440] Technica Corporation 22970 Indian Creek Drive, Suite 500, Dulles, VA 20166 Direct: 703.662.2068 Cell: 703.608.0845 technicacorp.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2226 bytes Desc: image001.jpg URL: From openstack at medberry.net Mon Oct 3 19:04:38 2016 From: openstack at medberry.net (David Medberry) Date: Mon, 3 Oct 2016 13:04:38 -0600 Subject: [Openstack-operators] Kilo to Liberty upgrade In-Reply-To: References: Message-ID: Definitely read (and re-read) the release notes here: https://wiki.openstack.org/wiki/ReleaseNotes/Liberty paying close attention to the Upgrade Notes, API changes, etc. You might also search (google or otherwise) this distribution list for history on this topic as many of us did this q On Mon, Oct 3, 2016 at 12:45 PM, Eric Yang wrote: > Hi all, > > > > Can anybody please share your experience with upgrading a Kilo environment > to Liberty? More specifically I have a Kilo environment deployed and > managed under Fuel 7.0, and I am looking for a path to upgrade it to > Liberty. 
I would certainly like to: > > - Keep it under management of the Fuel master; > > - Preserve the previous configuration as much as possible; > > - Find a way to migrate the instances with minimum to no > down-time. > > I am open to giving up on some of the above criteria if I can upgrade with > the least disruption to the instance workloads. > > > > I am also using Ceph underneath Cinder/Swift/Glance, and runs Juniper > OpenContrail as a Neutron plugin. > > > > Thanks in advance for any idea/suggestions, > > *Eric Yang* > > *Senior Solutions Architect* > > > > *[image: cid:image003.jpg at 01D1AB68.B3283440]* > > *Technica Corporation* > > 22970 Indian Creek Drive, Suite 500, Dulles, VA 20166 > > *Direct:* 703.662.2068 > > *Cell:* 703.608.0845 > > > > technicacorp.com > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2226 bytes Desc: not available URL: From openstack at medberry.net Mon Oct 3 19:06:44 2016 From: openstack at medberry.net (David Medberry) Date: Mon, 3 Oct 2016 13:06:44 -0600 Subject: [Openstack-operators] Kilo to Liberty upgrade In-Reply-To: References: Message-ID: Definitely read (and re-read) the release notes here: https://wiki.openstack.org/wiki/ReleaseNotes/Liberty paying close attention to the Upgrade Notes, API changes, etc. You might also search (google or otherwise) this distribution list for history on this topic as many of us did this quite some time ago. (Sorry hit "send" inadvertently before the note was done.) On Mon, Oct 3, 2016 at 12:45 PM, Eric Yang wrote: > Hi all, > > > > Can anybody please share your experience with upgrading a Kilo environment > to Liberty? More specifically I have a Kilo environment deployed and > managed under Fuel 7.0, and I am looking for a path to upgrade it to > Liberty. I would certainly like to: > > - Keep it under management of the Fuel master; > > - Preserve the previous configuration as much as possible; > > - Find a way to migrate the instances with minimum to no > down-time. > > I am open to giving up on some of the above criteria if I can upgrade with > the least disruption to the instance workloads. > > > > I am also using Ceph underneath Cinder/Swift/Glance, and runs Juniper > OpenContrail as a Neutron plugin. > > > > Thanks in advance for any idea/suggestions, > > *Eric Yang* > > *Senior Solutions Architect* > > > > *[image: cid:image003.jpg at 01D1AB68.B3283440]* > > *Technica Corporation* > > 22970 Indian Creek Drive, Suite 500, Dulles, VA 20166 > > *Direct:* 703.662.2068 > > *Cell:* 703.608.0845 > > > > technicacorp.com > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 2226 bytes Desc: not available URL: From inc007 at gmail.com Mon Oct 3 20:32:05 2016 From: inc007 at gmail.com (=?UTF-8?B?TWljaGHFgiBKYXN0cnrEmWJza2k=?=) Date: Mon, 3 Oct 2016 15:32:05 -0500 Subject: [Openstack-operators] [kolla] Heka deprecated in Kolla Message-ID: Hello, Kolla deprecates Heka in Ocata cycle, because Mozilla doesn't support this project any more. During Ocata cycle we will prepare migration plan and alternative to fill this missing functionality. As part of N->O cycle we will migrate Heka to alternative we will decide upon in following release. Our goal is to make this migration totally transparent and logging infrastructure (which heka was part of) will keep working in same way on higher levels (elastic, kibana). Regards, Michal From xavpaice at gmail.com Mon Oct 3 21:31:16 2016 From: xavpaice at gmail.com (Xav Paice) Date: Tue, 04 Oct 2016 10:31:16 +1300 Subject: [Openstack-operators] Rados Gateway to Swift migration Message-ID: <1475530276.4764.5.camel@localhost> Hi, We're in the process of migrating our object storage from Rados Gateway to Swift, and I was wondering if anyone has experiences with the data migration steps from one to the other? In particular, we're currently copying each object and container one by one, but the process of inspecting each and every object is incredibly slow. The logic we have is kind of rsync-like, so we can repeat and iterate until it's close, but the final go-live cutover will still take many hours to complete. Ideas on how to get over that would be very much appreciated! Many thanks From serverascode at gmail.com Mon Oct 3 23:29:57 2016 From: serverascode at gmail.com (Curtis) Date: Mon, 3 Oct 2016 17:29:57 -0600 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: <20161003175242.GE5776@csail.mit.edu> References: <20160930141526.GV5776@csail.mit.edu> <20161003175242.GE5776@csail.mit.edu> Message-ID: On Mon, Oct 3, 2016 at 11:52 AM, Jonathan Proulx wrote: > > So my sense from responses so far: > > No one is doing unified SDN solutions across clouds and no one really > wants to. I do want to (but am not doing). When I worked at a public cloud based on openstack we certainly wanted this kind of functionality, mostly between openstack regions. Likely so would telecoms, hence the tricircle link that was sent. But I left before we really got into it so I'm not sure what's happened there, or what has or is happening in openstack-land. I would also think it would be great for tenants to be able to setup connections into AWS, and even start up AWS instances via the openstack API. :) Thanks, Curtis. > > Consensus is just treat each network island like another remote DC and > use normal VPN type stuff to glue them together. > > ( nod to http://romana.io an interesting looking network and security > automation project as a network agnostic alternative to SDN for > managing cross cloud policy on whatever networks are available. ) > > -Jon > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Blog: serverascode.com From john.vanommen at gmail.com Mon Oct 3 23:54:18 2016 From: john.vanommen at gmail.com (John van Ommen) Date: Mon, 3 Oct 2016 16:54:18 -0700 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? 
In-Reply-To: References: <20160930141526.GV5776@csail.mit.edu> <20161003175242.GE5776@csail.mit.edu> Message-ID: In regards to your last comment, that "it would be great for tenants to be able to setup connections into AWS", HP CSA comes close to doing that: http://www8.hp.com/us/en/software-solutions/cloud-service-automation/ It's not *exactly* what you're looking for, you wouldn't be using the OpenStack API. But it's as close as you can get right now. IMHO, HP's OpenStack and it's integration with HP's automation tools has made giant strides in the last year. On Mon, Oct 3, 2016 at 4:29 PM, Curtis wrote: > On Mon, Oct 3, 2016 at 11:52 AM, Jonathan Proulx wrote: >> >> So my sense from responses so far: >> >> No one is doing unified SDN solutions across clouds and no one really >> wants to. > > I do want to (but am not doing). When I worked at a public cloud based > on openstack we certainly wanted this kind of functionality, mostly > between openstack regions. Likely so would telecoms, hence the > tricircle link that was sent. But I left before we really got into it > so I'm not sure what's happened there, or what has or is happening in > openstack-land. > > I would also think it would be great for tenants to be able to setup > connections into AWS, and even start up AWS instances via the > openstack API. :) > > Thanks, > Curtis. > >> >> Consensus is just treat each network island like another remote DC and >> use normal VPN type stuff to glue them together. >> >> ( nod to http://romana.io an interesting looking network and security >> automation project as a network agnostic alternative to SDN for >> managing cross cloud policy on whatever networks are available. ) >> >> -Jon >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > -- > Blog: serverascode.com > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From stdake at cisco.com Tue Oct 4 00:08:51 2016 From: stdake at cisco.com (Steven Dake (stdake)) Date: Tue, 4 Oct 2016 00:08:51 +0000 Subject: [Openstack-operators] [kolla] Heka deprecated in Kolla In-Reply-To: References: Message-ID: <34200550-E67E-41C2-8DDD-53D0B58A34E4@cisco.com> One small correction. The plan is to deprecate in the Newton cycle (but still keep it in active development until a migration approach is fully fleshed out in Ocata after the 3-month timer expires on the deprecation policy). The rest of Michal?s statement stands on its own. Regards -steve From: Micha? Jastrz?bski Date: Monday, October 3, 2016 at 1:32 PM To: OpenStack Operators Subject: [Openstack-operators] [kolla] Heka deprecated in Kolla Hello, Kolla deprecates Heka in Ocata cycle, because Mozilla doesn't support this project any more. During Ocata cycle we will prepare migration plan and alternative to fill this missing functionality. As part of N->O cycle we will migrate Heka to alternative we will decide upon in following release. Our goal is to make this migration totally transparent and logging infrastructure (which heka was part of) will keep working in same way on higher levels (elastic, kibana). 
Regards, Michal _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Tue Oct 4 11:54:28 2016 From: neil at tigera.io (Neil Jerram) Date: Tue, 04 Oct 2016 11:54:28 +0000 Subject: [Openstack-operators] SDN for hybridcloud, does it *really* exist? In-Reply-To: <20161003175242.GE5776@csail.mit.edu> References: <20160930141526.GV5776@csail.mit.edu> <20161003175242.GE5776@csail.mit.edu> Message-ID: Hi Jonathan, There's also Calico [1,2], which in its simplest form (and as currently implemented): - uses just IP routing (v4 and/or v6)? to connect workloads (VMs / containers / pods / bare metal) - has a security model that works across workloads hosted in different clouds, and so can specify whether and how hybrid cloud workloads should be able to talk to each other (and an agent, Felix, that implements that model). (That does imply a couple of restrictions: that current Calico doesn?t support (1) workloads that genuinely need to be L2-adjacent to each other, and (2) overlapping IPs or "bring your own addressing." We have plans for those if they're really needed, and in the meantime we're seeing plenty of interest in adoption where those points aren't needed, and the simplicity and scalability of Calico's approach are attractive.) One of the reasons for choosing a flat routed IP model was precisely so that workloads just fit into whatever network infrastructure is already there ? and a big driver for that was so that interconnection between ?in cluster? and ?out of cluster? resources would be completely straightforward (not requiring on/off ramps, configuring virtual router ports, mapping between VLANs, etc.) Calico has been separately integrated for some time with OpenStack, Kubernetes and Docker, and there's work underway to demonstrate hybrid cloud combinations of those, I hope in Barcelona. I hope that's of interest; sorry for replying relatively late to this thread. Neil [1] http://docs.openstack.org/developer/networking-calico/ [2] https://www.projectcalico.org/ On Mon, Oct 3, 2016 at 6:54 PM Jonathan Proulx wrote: > > So my sense from responses so far: > > No one is doing unified SDN solutions across clouds and no one really > wants to. > > Consensus is just treat each network island like another remote DC and > use normal VPN type stuff to glue them together. > > ( nod to http://romana.io an interesting looking network and security > automation project as a network agnostic alternative to SDN for > managing cross cloud policy on whatever networks are available. ) > > -Jon > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mvanwink at rackspace.com Tue Oct 4 12:30:16 2016 From: mvanwink at rackspace.com (Matt Van Winkle) Date: Tue, 4 Oct 2016 12:30:16 +0000 Subject: [Openstack-operators] Ops Meetups Team - Meeting Reminder Message-ID: Hey Ops Meetup folks, Gentle reminder that the meeting is in about 1.5 hours at 14:00 UTC. I won?t be able to make it because of an all-day planning meeting. Hopefully, Tom will be around to help manage the chaos. 
Edgar should be joining to talk about the UC sessions and how we can help
with those. Also, we need to further refine the schedule. I've made some
notes in the etherpad [1]

Thanks!
VW

[1] https://etherpad.openstack.org/p/ops-meetups-team

From serverascode at gmail.com Tue Oct 4 13:18:59 2016
From: serverascode at gmail.com (Curtis)
Date: Tue, 4 Oct 2016 07:18:59 -0600
Subject: [Openstack-operators] Rados Gateway to Swift migration
In-Reply-To: <1475530276.4764.5.camel@localhost>
References: <1475530276.4764.5.camel@localhost>
Message-ID:

On Mon, Oct 3, 2016 at 3:31 PM, Xav Paice wrote:
> Hi,
>
> We're in the process of migrating our object storage from Rados Gateway
> to Swift, and I was wondering if anyone has experiences with the data
> migration steps from one to the other?
>
> In particular, we're currently copying each object and container one by
> one, but the process of inspecting each and every object is incredibly
> slow. The logic we have is kind of rsync-like, so we can repeat and
> iterate until it's close, but the final go-live cutover will still take
> many hours to complete. Ideas on how to get over that would be very
> much appreciated!

Could you load balance to one backend or another based on whether or
not the account has been migrated yet? I haven't thought that through
completely...

Thanks,
Curtis.

>
> Many thanks
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Blog: serverascode.com

From dbelova at mirantis.com Tue Oct 4 13:38:53 2016
From: dbelova at mirantis.com (Dina Belova)
Date: Tue, 4 Oct 2016 06:38:53 -0700
Subject: [Openstack-operators] [Performance] No Performance Team meeting today
Message-ID:

Folks, due to an internal training a big part of Mirantis won't be able
to attend the meeting. Let's skip this for today. Sorry for the
inconvenience.

Cheers,
Dina

--
*Dina Belova*
*Senior Software Engineer*
Mirantis, Inc.
525 Almanor Avenue, 4th Floor
Sunnyvale, CA 94085
*Phone: 650-772-8418
Email: dbelova at mirantis.com *
www.mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From litong01 at us.ibm.com Tue Oct 4 14:25:39 2016
From: litong01 at us.ibm.com (Tong Li)
Date: Tue, 4 Oct 2016 10:25:39 -0400
Subject: [Openstack-operators] [interop-challenge] Patch set review for interop-challenge
Message-ID:

Can someone please review the patch set for the interop challenge effort?
Thanks.

https://review.openstack.org/#/c/379116/

Tong Li
IBM Open Technology
Building 501/B205
litong01 at us.ibm.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From edgar.magana at workday.com Tue Oct 4 14:39:59 2016
From: edgar.magana at workday.com (Edgar Magana)
Date: Tue, 4 Oct 2016 14:39:59 +0000
Subject: [Openstack-operators] Ops Meetups Team - Meeting Reminder
Message-ID: <977F4EFE-1B7F-4233-A7E0-01EFF2F4CB73@workday.com>

Hello All,

I am trying to join the meeting but I am not sure which IRC channel you use.

Edgar

On 10/4/16, 5:30 AM, "Matt Van Winkle" wrote:

    Hey Ops Meetup folks,

    Gentle reminder that the meeting is in about 1.5 hours at 14:00 UTC.
    I won't be able to make it because of an all-day planning meeting.
    Hopefully, Tom will be around to help manage the chaos. Edgar should
    be joining to talk about the UC sessions and how we can help with
    those. Also, we need to further refine the schedule. I've made some
    notes in the etherpad [1]

    Thanks!
VW [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__etherpad.openstack.org_p_ops-2Dmeetups-2Dteam&d=DQIGaQ&c=DS6PUFBBr_KiLo7Sjt3ljp5jaW5k2i9ijVXllEdOozc&r=G0XRJfDQsuBvqa_wpWyDAUlSpeMV4W1qfWqBfctlWwQ&m=5OOWM-nMlcl9Vc6QqIS3VJhGcj2VdyYOmTqhyMNIK8c&s=SYB5RddFgQ2KUjQ0-ZY3Ajq2uhuWMN6-Nq-ZK3N48Jg&e= _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DQIGaQ&c=DS6PUFBBr_KiLo7Sjt3ljp5jaW5k2i9ijVXllEdOozc&r=G0XRJfDQsuBvqa_wpWyDAUlSpeMV4W1qfWqBfctlWwQ&m=5OOWM-nMlcl9Vc6QqIS3VJhGcj2VdyYOmTqhyMNIK8c&s=hu8MgvT0ZRfyCbRxSG-uzV10I5fLvMTfrP7YP8Cy6aA&e= From mrhillsman at gmail.com Tue Oct 4 14:41:39 2016 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Tue, 04 Oct 2016 09:41:39 -0500 Subject: [Openstack-operators] Ops Meetups Team - Meeting Reminder In-Reply-To: <977F4EFE-1B7F-4233-A7E0-01EFF2F4CB73@workday.com> References: <977F4EFE-1B7F-4233-A7E0-01EFF2F4CB73@workday.com> Message-ID: <9BAF1AE7-8BE8-458C-B640-539DB1F02764@gmail.com> #openstack-operators -- Melvin Hillsman Ops Technical Lead OpenStack Innovation Center mrhillsman at gmail.com phone: (210) 312-1267 mobile: (210) 413-1659 Learner | Ideation | Belief | Responsibility | Command http://osic.org On 10/4/16, 9:39 AM, "Edgar Magana" wrote: >Hello All, > >I am trying to join the meeting but I am not sure which IRC channel you use. > >Edgar > >On 10/4/16, 5:30 AM, "Matt Van Winkle" wrote: > > Hey Ops Meetup folks, > > Gentle reminder that the meeting is in about 1.5 hours at 14:00 UTC. I won?t be able to make it because of an all-day planning meeting. Hopefully, Tom will be around to help manage the chaos. Edgar should be joining to talk about the UC sessions and how we can help with those. Also, we need to further refine the schedule. I?ve made some notes in the ehterpad [1] > > > > Thanks! > > VW > > > > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__etherpad.openstack.org_p_ops-2Dmeetups-2Dteam&d=DQIGaQ&c=DS6PUFBBr_KiLo7Sjt3ljp5jaW5k2i9ijVXllEdOozc&r=G0XRJfDQsuBvqa_wpWyDAUlSpeMV4W1qfWqBfctlWwQ&m=5OOWM-nMlcl9Vc6QqIS3VJhGcj2VdyYOmTqhyMNIK8c&s=SYB5RddFgQ2KUjQ0-ZY3Ajq2uhuWMN6-Nq-ZK3N48Jg&e= > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DQIGaQ&c=DS6PUFBBr_KiLo7Sjt3ljp5jaW5k2i9ijVXllEdOozc&r=G0XRJfDQsuBvqa_wpWyDAUlSpeMV4W1qfWqBfctlWwQ&m=5OOWM-nMlcl9Vc6QqIS3VJhGcj2VdyYOmTqhyMNIK8c&s=hu8MgvT0ZRfyCbRxSG-uzV10I5fLvMTfrP7YP8Cy6aA&e= > > >_______________________________________________ >OpenStack-operators mailing list >OpenStack-operators at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 4281 bytes Desc: not available URL: From edgar.magana at workday.com Tue Oct 4 14:42:25 2016 From: edgar.magana at workday.com (Edgar Magana) Date: Tue, 4 Oct 2016 14:42:25 +0000 Subject: [Openstack-operators] Ops Meetups Team - Meeting Reminder In-Reply-To: <9BAF1AE7-8BE8-458C-B640-539DB1F02764@gmail.com> References: <977F4EFE-1B7F-4233-A7E0-01EFF2F4CB73@workday.com> <9BAF1AE7-8BE8-458C-B640-539DB1F02764@gmail.com> Message-ID: <0836D73B-F9E6-4145-8E46-FCA4F9931160@workday.com> Thanks! Edgar On 10/4/16, 7:41 AM, "Melvin Hillsman" wrote: #openstack-operators -- Melvin Hillsman Ops Technical Lead OpenStack Innovation Center mrhillsman at gmail.com phone: (210) 312-1267 mobile: (210) 413-1659 Learner | Ideation | Belief | Responsibility | Command http://osic.org On 10/4/16, 9:39 AM, "Edgar Magana" wrote: >Hello All, > >I am trying to join the meeting but I am not sure which IRC channel you use. > >Edgar > >On 10/4/16, 5:30 AM, "Matt Van Winkle" wrote: > > Hey Ops Meetup folks, > > Gentle reminder that the meeting is in about 1.5 hours at 14:00 UTC. I won?t be able to make it because of an all-day planning meeting. Hopefully, Tom will be around to help manage the chaos. Edgar should be joining to talk about the UC sessions and how we can help with those. Also, we need to further refine the schedule. I?ve made some notes in the ehterpad [1] > > > > Thanks! > > VW > > > > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__etherpad.openstack.org_p_ops-2Dmeetups-2Dteam&d=DQIGaQ&c=DS6PUFBBr_KiLo7Sjt3ljp5jaW5k2i9ijVXllEdOozc&r=G0XRJfDQsuBvqa_wpWyDAUlSpeMV4W1qfWqBfctlWwQ&m=5OOWM-nMlcl9Vc6QqIS3VJhGcj2VdyYOmTqhyMNIK8c&s=SYB5RddFgQ2KUjQ0-ZY3Ajq2uhuWMN6-Nq-ZK3N48Jg&e= > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DQIGaQ&c=DS6PUFBBr_KiLo7Sjt3ljp5jaW5k2i9ijVXllEdOozc&r=G0XRJfDQsuBvqa_wpWyDAUlSpeMV4W1qfWqBfctlWwQ&m=5OOWM-nMlcl9Vc6QqIS3VJhGcj2VdyYOmTqhyMNIK8c&s=hu8MgvT0ZRfyCbRxSG-uzV10I5fLvMTfrP7YP8Cy6aA&e= > > >_______________________________________________ >OpenStack-operators mailing list >OpenStack-operators at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From zioproto at gmail.com Tue Oct 4 14:45:30 2016 From: zioproto at gmail.com (Saverio Proto) Date: Tue, 4 Oct 2016 16:45:30 +0200 Subject: [Openstack-operators] Ops Meetups Team - Meeting Reminder In-Reply-To: <9BAF1AE7-8BE8-458C-B640-539DB1F02764@gmail.com> References: <977F4EFE-1B7F-4233-A7E0-01EFF2F4CB73@workday.com> <9BAF1AE7-8BE8-458C-B640-539DB1F02764@gmail.com> Message-ID: sorry I will not make it today saverio Il 04 ott 2016 4:42 PM, "Melvin Hillsman" ha scritto: > #openstack-operators > > -- > Melvin Hillsman > Ops Technical Lead > OpenStack Innovation Center > mrhillsman at gmail.com > phone: (210) 312-1267 > mobile: (210) 413-1659 > Learner | Ideation | Belief | Responsibility | Command > http://osic.org > > > > > > > > > On 10/4/16, 9:39 AM, "Edgar Magana" wrote: > > >Hello All, > > > >I am trying to join the meeting but I am not sure which IRC channel you > use. > > > >Edgar > > > >On 10/4/16, 5:30 AM, "Matt Van Winkle" wrote: > > > > Hey Ops Meetup folks, > > > > Gentle reminder that the meeting is in about 1.5 hours at 14:00 UTC. > I won?t be able to make it because of an all-day planning meeting. 
> Hopefully, Tom will be around to help manage the chaos. Edgar should be > joining to talk about the UC sessions and how we can help with those. > Also, we need to further refine the schedule. I?ve made some notes in the > ehterpad [1] > > > > > > > > Thanks! > > > > VW > > > > > > > > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__ > etherpad.openstack.org_p_ops-2Dmeetups-2Dteam&d=DQIGaQ&c=DS6PUFBBr_ > KiLo7Sjt3ljp5jaW5k2i9ijVXllEdOozc&r=G0XRJfDQsuBvqa_ > wpWyDAUlSpeMV4W1qfWqBfctlWwQ&m=5OOWM-nMlcl9Vc6QqIS3VJhGcj2VdyYOmTqh > yMNIK8c&s=SYB5RddFgQ2KUjQ0-ZY3Ajq2uhuWMN6-Nq-ZK3N48Jg&e= > > > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists. > openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DQIGaQ&c= > DS6PUFBBr_KiLo7Sjt3ljp5jaW5k2i9ijVXllEdOozc&r=G0XRJfDQsuBvqa_ > wpWyDAUlSpeMV4W1qfWqBfctlWwQ&m=5OOWM-nMlcl9Vc6QqIS3VJhGcj2VdyYOmTqh > yMNIK8c&s=hu8MgvT0ZRfyCbRxSG-uzV10I5fLvMTfrP7YP8Cy6aA&e= > > > > > >_______________________________________________ > >OpenStack-operators mailing list > >OpenStack-operators at lists.openstack.org > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From semoac at gmail.com Tue Oct 4 16:39:01 2016 From: semoac at gmail.com (=?UTF-8?Q?Sergio_Morales_Acu=C3=B1a?=) Date: Tue, 04 Oct 2016 16:39:01 +0000 Subject: [Openstack-operators] [mitaka] Nova, Neutron and dns_name Message-ID: Hi. I'm facing a problem related to dns_name and dnsmasq. Nova and Neutron can create a port with dns_name and dnsmasq it's correctly updating "/addn_host". The problem is when Nova boot an instance using a new or previusly created port. The port has the correct dns_name but dnsmaq (dhcp_agent) it's using the generic (ex. host-10-0-0-16) names. If I restart dhcp_agent or do a port-update on the port, the correct name is added to addn_host. Any ideas? -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig.openstack at telfer.org Tue Oct 4 18:38:14 2016 From: stig.openstack at telfer.org (Stig Telfer) Date: Tue, 4 Oct 2016 19:38:14 +0100 Subject: [Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Tuesday 2100 UTC Message-ID: Greetings all - We have a Scientific WG IRC meeting on Tuesday at 2100 UTC on channel #openstack-meeting. The agenda is available here[1] and full IRC meeting details are here[2]. This week Blair has findings from some useful experiments in hypervisor tuning. Plus we have a new space (and new time) for our sessions at Barcelona. Plus some picks from the schedule at SC. If anyone would like to add an item for discussion on the agenda, it is also available in an etherpad[3]. 
Best wishes, Stig [1] https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_October_4th_2016 [2] http://eavesdrop.openstack.org/#Scientific_Working_Group [3] https://etherpad.openstack.org/p/Scientific-WG-next-meeting-agenda From lebre.adrien at free.fr Tue Oct 4 21:38:08 2016 From: lebre.adrien at free.fr (lebre.adrien at free.fr) Date: Tue, 4 Oct 2016 23:38:08 +0200 (CEST) Subject: [Openstack-operators] [scientific][scientific-wg][Massively Distributed] - Workshop on Openstack-Federated Identity integration In-Reply-To: Message-ID: <1737622709.22347130.1475617088541.JavaMail.root@zimbra29-e5.priv.proxad.net> Hi Saverio, I'm wondering whether you are aware of the massively distributed Working Group [1]. We are currently defining the agenda for our working sessions [2]. According to your email (in particular the use-case you raised), the topic is interesting and can be discussed during one of the scheduled sessions (if you are available obviously ;)) Actually, Renater (the French NREN) is taking part to the Beyond the clouds initiative. This initiative targets the deployment of an OpenStack throughout several points of presence of a network operator (basically one pop = one rack) [3] Several members of this initiative are strongly involved in this new WG. I just sent an email to my colleagues from Renater to see whether they plan to attend the meeting in Rome. I would be pleased to get informations and major results of your discussions. Will you attend the Barcelona summit ? if yes, do you think you can come and present major results of your exchanges in one of the massively distributed WG sessions. Thanks, Adrien [1] https://wiki.openstack.org/wiki/Massively_Distributed_Clouds [2] https://etherpad.openstack.org/p/massively_distribute-barcelona_working_sessions [3] http://beyondtheclouds.github.io ----- Mail original ----- > De: "Saverio Proto" > ?: "stig openstack" > Cc: "OpenStack Operators" > Envoy?: Lundi 26 Septembre 2016 16:10:58 > Objet: [Openstack-operators] [scientific][scientific-wg] - Workshop on Openstack-Federated Identity integration > > Hello operators, > > At GARR in Rome there will be an event shortly before Barcelona about > Openstack and Identity Federation. > > https://eventr.geant.org/events/2527 > > This is a use case that is very important for NREN running public > cloud for Universities, where a Identity Federation is already > deployed for other services. > > At the meetup in Manchester when we talked about Federation we had an > interesting session: > https://etherpad.openstack.org/p/MAN-ops-Keystone-and-Federation > > I tagged the mail with scientific-wg because this looks like a > use-case that is of big interest of the academic institutions. Look > the etherpad of Manchester the section '' > Who do you want to federate with?'' > > Thank you. 
> > Saverio > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From xavpaice at gmail.com Tue Oct 4 21:45:34 2016 From: xavpaice at gmail.com (Xav Paice) Date: Wed, 05 Oct 2016 10:45:34 +1300 Subject: [Openstack-operators] Rados Gateway to Swift migration In-Reply-To: References: <1475530276.4764.5.camel@localhost> Message-ID: <1475617534.4764.18.camel@localhost> On Tue, 2016-10-04 at 07:18 -0600, Curtis wrote: > On Mon, Oct 3, 2016 at 3:31 PM, Xav Paice wrote: > > Hi, > > > > We're in the process of migrating our object storage from Rados Gateway > > to Swift, and I was wondering if anyone has experiences with the data > > migration steps from one to the other? > > > > In particular, we're currently copying each object and container one by > > one, but the process of inspecting each and every object is incredibly > > slow. The logic we have is kind of rsync-like, so we can repeat and > > iterate until it's close, but the final go-live cutover will still take > > many hours to complete. Ideas on how to get over that would be very > > much appreciated! > > Could you load balance to on backend or another based on whether or > not the account has been migrated yet? I haven't thought that through > completely... > That would be the preferable situation, migrating one customer at a time, but we couldn't figure out how to achieve that goal without writing some kind of middleware/proxy type of thing, or being clever with haproxy. We may yet have to go down that route, but I'm hoping not! > Thanks, > Curtis. > > > > > Many thanks > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > From Kevin.Fox at pnnl.gov Tue Oct 4 21:56:30 2016 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 4 Oct 2016 21:56:30 +0000 Subject: [Openstack-operators] Rados Gateway to Swift migration In-Reply-To: <1475617534.4764.18.camel@localhost> References: <1475530276.4764.5.camel@localhost> , <1475617534.4764.18.camel@localhost> Message-ID: <1A3C52DFCD06494D8528644858247BF01BDF59B2@EX10MBOX06.pnnl.gov> OpenStack really needs a way to have a control api for selecting a swift "flavor", and letting you have multiple swift endpoints within, so swift the software, radowgw, and vendor endpoints can all co'exist. Kevin ________________________________________ From: Xav Paice [xavpaice at gmail.com] Sent: Tuesday, October 04, 2016 2:45 PM To: Curtis Cc: OpenStack Operators Subject: Re: [Openstack-operators] Rados Gateway to Swift migration On Tue, 2016-10-04 at 07:18 -0600, Curtis wrote: > On Mon, Oct 3, 2016 at 3:31 PM, Xav Paice wrote: > > Hi, > > > > We're in the process of migrating our object storage from Rados Gateway > > to Swift, and I was wondering if anyone has experiences with the data > > migration steps from one to the other? > > > > In particular, we're currently copying each object and container one by > > one, but the process of inspecting each and every object is incredibly > > slow. The logic we have is kind of rsync-like, so we can repeat and > > iterate until it's close, but the final go-live cutover will still take > > many hours to complete. Ideas on how to get over that would be very > > much appreciated! 
> > Could you load balance to on backend or another based on whether or > not the account has been migrated yet? I haven't thought that through > completely... > That would be the preferable situation, migrating one customer at a time, but we couldn't figure out how to achieve that goal without writing some kind of middleware/proxy type of thing, or being clever with haproxy. We may yet have to go down that route, but I'm hoping not! > Thanks, > Curtis. > > > > > Many thanks > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From xavpaice at gmail.com Tue Oct 4 22:02:42 2016 From: xavpaice at gmail.com (Xav Paice) Date: Wed, 05 Oct 2016 11:02:42 +1300 Subject: [Openstack-operators] Rados Gateway to Swift migration In-Reply-To: <1A3C52DFCD06494D8528644858247BF01BDF59B2@EX10MBOX06.pnnl.gov> References: <1475530276.4764.5.camel@localhost> ,<1475617534.4764.18.camel@localhost> <1A3C52DFCD06494D8528644858247BF01BDF59B2@EX10MBOX06.pnnl.gov> Message-ID: <1475618562.4764.20.camel@localhost> On Tue, 2016-10-04 at 21:56 +0000, Fox, Kevin M wrote: > OpenStack really needs a way to have a control api for selecting a swift "flavor", and letting you have multiple swift endpoints within, so swift the software, radowgw, and vendor endpoints can all co'exist. > Just thinking about it, we use haproxy and the url for requests does include the project id, so we might be able to regex the requests and direct to the appropriate backend. It could make for complex support issues in the meantime, but a much smoother migration. > Kevin > ________________________________________ > From: Xav Paice [xavpaice at gmail.com] > Sent: Tuesday, October 04, 2016 2:45 PM > To: Curtis > Cc: OpenStack Operators > Subject: Re: [Openstack-operators] Rados Gateway to Swift migration > > On Tue, 2016-10-04 at 07:18 -0600, Curtis wrote: > > On Mon, Oct 3, 2016 at 3:31 PM, Xav Paice wrote: > > > Hi, > > > > > > We're in the process of migrating our object storage from Rados Gateway > > > to Swift, and I was wondering if anyone has experiences with the data > > > migration steps from one to the other? > > > > > > In particular, we're currently copying each object and container one by > > > one, but the process of inspecting each and every object is incredibly > > > slow. The logic we have is kind of rsync-like, so we can repeat and > > > iterate until it's close, but the final go-live cutover will still take > > > many hours to complete. Ideas on how to get over that would be very > > > much appreciated! > > > > Could you load balance to on backend or another based on whether or > > not the account has been migrated yet? I haven't thought that through > > completely... > > > > That would be the preferable situation, migrating one customer at a > time, but we couldn't figure out how to achieve that goal without > writing some kind of middleware/proxy type of thing, or being clever > with haproxy. We may yet have to go down that route, but I'm hoping > not! > > > > Thanks, > > Curtis. 
> > > > > > > > Many thanks > > > > > > > > > _______________________________________________ > > > OpenStack-operators mailing list > > > OpenStack-operators at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > > > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From serverascode at gmail.com Tue Oct 4 23:48:06 2016 From: serverascode at gmail.com (Curtis) Date: Tue, 4 Oct 2016 17:48:06 -0600 Subject: [Openstack-operators] Rados Gateway to Swift migration In-Reply-To: <1475617534.4764.18.camel@localhost> References: <1475530276.4764.5.camel@localhost> <1475617534.4764.18.camel@localhost> Message-ID: On Tue, Oct 4, 2016 at 3:45 PM, Xav Paice wrote: > On Tue, 2016-10-04 at 07:18 -0600, Curtis wrote: >> On Mon, Oct 3, 2016 at 3:31 PM, Xav Paice wrote: >> > Hi, >> > >> > We're in the process of migrating our object storage from Rados Gateway >> > to Swift, and I was wondering if anyone has experiences with the data >> > migration steps from one to the other? >> > >> > In particular, we're currently copying each object and container one by >> > one, but the process of inspecting each and every object is incredibly >> > slow. The logic we have is kind of rsync-like, so we can repeat and >> > iterate until it's close, but the final go-live cutover will still take >> > many hours to complete. Ideas on how to get over that would be very >> > much appreciated! >> >> Could you load balance to on backend or another based on whether or >> not the account has been migrated yet? I haven't thought that through >> completely... >> > > That would be the preferable situation, migrating one customer at a > time, but we couldn't figure out how to achieve that goal without > writing some kind of middleware/proxy type of thing, or being clever > with haproxy. We may yet have to go down that route, but I'm hoping > not! Maybe you have someone on staff who loves writing lua (for haproxy)? :) > > >> Thanks, >> Curtis. >> >> > >> > Many thanks >> > >> > >> > _______________________________________________ >> > OpenStack-operators mailing list >> > OpenStack-operators at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> >> > > -- Blog: serverascode.com From xavpaice at gmail.com Wed Oct 5 00:28:04 2016 From: xavpaice at gmail.com (Xav Paice) Date: Wed, 05 Oct 2016 13:28:04 +1300 Subject: [Openstack-operators] Rados Gateway to Swift migration In-Reply-To: References: <1475530276.4764.5.camel@localhost> <1475617534.4764.18.camel@localhost> Message-ID: <1475627284.4764.24.camel@localhost> On Tue, 2016-10-04 at 17:48 -0600, Curtis wrote: > Maybe you have someone on staff who loves writing lua (for haproxy)? :) > Well, maybe not that far, but yeah we're now thinking down that route. If we get there, I'll quickly write something up about it. 
Many thanks for the suggestion :)

From xavpaice at gmail.com Wed Oct 5 02:29:27 2016
From: xavpaice at gmail.com (Xav Paice)
Date: Wed, 05 Oct 2016 15:29:27 +1300
Subject: [Openstack-operators] Rados Gateway to Swift migration
In-Reply-To: <1475627284.4764.24.camel@localhost>
References: <1475530276.4764.5.camel@localhost>
	<1475617534.4764.18.camel@localhost>
	<1475627284.4764.24.camel@localhost>
Message-ID: <1475634567.4764.32.camel@localhost>

On Wed, 2016-10-05 at 13:28 +1300, Xav Paice wrote:
> On Tue, 2016-10-04 at 17:48 -0600, Curtis wrote:
> > Maybe you have someone on staff who loves writing lua (for haproxy)? :)
> >
>
> Well, maybe not that far, but yeah we're now thinking down that route.
> If we get there, I'll quickly write something up about it. Many thanks
> for the suggestion :)
>

OK, that wasn't as hard as I thought it would be. Thanks Curtis!

Haproxy config snippet, in case anyone else has the same need (note, not
in production, only rudimentary testing, and slightly sanitized for a
mailing list):

frontend objectstore
    bind :::8843 ssl crt host.crt.pem ca-file ca.pem no-sslv3
    mode http
    acl rgw_customer path_beg -m sub -f /some/list/of/rgw-folks
    use_backend rgw00 if rgw_customer
    default_backend swift-pxy00

backend rgw00
    balance roundrobin
    mode http
    http-request set-path /%[path,regsub(/v1/AUTH_.{32},swift/v1)]
    server rgw1 10.0.0.111:8443 check ca-file ca.pem crt host.crt.pem ssl
    server rgw2 10.0.0.230:8443 check ca-file ca.pem crt host.crt.pem ssl

backend swift-pxy00
    balance roundrobin
    mode http
    option httpchk HEAD /healthcheck HTTP/1.0
    option forwardfor
    option http-server-close
    timeout http-keep-alive 500
    server opxy1 10.0.0.123:8843 check ca-file ca.pem crt host.crt.pem ssl

From blair.bethwaite at gmail.com Wed Oct 5 04:42:53 2016
From: blair.bethwaite at gmail.com (Blair Bethwaite)
Date: Wed, 5 Oct 2016 15:42:53 +1100
Subject: [Openstack-operators] Rados Gateway to Swift migration
In-Reply-To: <1475634567.4764.32.camel@localhost>
References: <1475530276.4764.5.camel@localhost>
	<1475617534.4764.18.camel@localhost>
	<1475627284.4764.24.camel@localhost>
	<1475634567.4764.32.camel@localhost>
Message-ID:

Nice! But I'm curious, why the need to migrate?

On 5 October 2016 at 13:29, Xav Paice wrote:
> On Wed, 2016-10-05 at 13:28 +1300, Xav Paice wrote:
>> On Tue, 2016-10-04 at 17:48 -0600, Curtis wrote:
>> > Maybe you have someone on staff who loves writing lua (for haproxy)? :)
>> >
>>
>> Well, maybe not that far, but yeah we're now thinking down that route.
>> If we get there, I'll quickly write something up about it. Many thanks
>> for the suggestion :)
>>
>
> OK, that wasn't as hard as I thought it would be. Thanks Curtis!
> > Haproxy config snippet, in case anyone else has the same need (note, not > in production, only rudimentary testing, and slightly sanitized for a > mailing list): > > frontend objectstore > bind :::8843 ssl crt host.crt.pem ca-file ca.pem no-sslv3 > mode http > acl rgw_customer path_beg -m sub -f /some/list/of/rgw-folks > use_backend rgw00 if rgw_customer > default_backend swift-pxy00 > > backend rgw00 > balance roundrobin > mode http > http-request set-path /%[path,regsub(/v1/AUTH_.{32},swift/v1)] > server rgw1 10.0.0.111:8443 check ca-file ca.pem crt host.crt.pem ssl > server rgw2 10.0.0.230:8443 check ca-file ca.pem crt host.crt.pem ssl > > backend swift-pxy00 > balance roundrobin > mode http > option httpchk HEAD /healthcheck HTTP/1.0 > option forwardfor > option http-server-close > timeout http-keep-alive 500 > server opxy1 10.0.0.123:8843 check ca-file ca.pem crt host.crt.pem ssl > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Cheers, ~Blairo From blair.bethwaite at gmail.com Wed Oct 5 05:26:30 2016 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Wed, 5 Oct 2016 16:26:30 +1100 Subject: [Openstack-operators] Barcelona Ops Meetup Planning In-Reply-To: References: <18C1B8AC-8838-4B68-901C-87CA4F4F521B@rackspace.com> Message-ID: Hi all, I've just had a look at this with a view to adding 10 minutes somewhere on what to do with the hypervisor tuning guide, but I see the free-form notes on the etherpad have been marked as "Old", so figure it's better to discuss here first... Could maybe fit under ops-nova or ops-hardware? Cheers, Blair On 23 September 2016 at 14:53, Tom Fifield wrote: > Bump! Please add your +1s or session suggestions. We're also looking for > lightning talks. > > https://etherpad.openstack.org/p/BCN-ops-meetup > > > On 20/09/16 21:50, Matt Van Winkle wrote: >> >> Hello all, >> >> The etherpad to gather ideas from you all for sessions during the >> Barcelona Ops Meetup. Please take some time to review and add your >> comments here: >> >> >> >> https://etherpad.openstack.org/p/BCN-ops-meetup >> >> >> >> >> >> The meetup planning team will be meeting weekly between now and the >> summit to plan all this. If You are interested in helping with that >> process, you are welcome to join us in #openstack-operators at 14:00 UTC >> on Tuesdays. We are always ready to welcome new help. >> >> >> >> Thanks! >> >> VW >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Cheers, ~Blairo From xavpaice at gmail.com Wed Oct 5 05:57:49 2016 From: xavpaice at gmail.com (Xav Paice) Date: Wed, 05 Oct 2016 18:57:49 +1300 Subject: [Openstack-operators] Rados Gateway to Swift migration In-Reply-To: References: <1475530276.4764.5.camel@localhost> <1475617534.4764.18.camel@localhost> <1475627284.4764.24.camel@localhost> <1475634567.4764.32.camel@localhost> Message-ID: <1475647069.21186.42.camel@localhost> On Wed, 2016-10-05 at 15:42 +1100, Blair Bethwaite wrote: > Nice! But I'm curious, why the need to migrate? Hmm. 
I want to be diplomatic since both are great for their thing.

For us, the main reason was simply that we wanted replication of the
object storage between regions (we started the process before that was a
feature in RGW), but also being a public cloud we also wanted to be able
to bill customers for their usage, and we were finding that incredibly
difficult with Rados Gateway in comparison to Swift.

That, and we found customers were using RGW as a backup, and that's on
the same storage back end as our Cinder and Glance - moving to a
different platform makes it separate.

There are a few other features in Swift that aren't in RGW, and we have
customers asking for them, which really matters a lot to us.

There are pros and cons for both, I don't regret us using RGW but it just
doesn't suit our needs right now.

From blair.bethwaite at gmail.com Wed Oct 5 06:20:59 2016
From: blair.bethwaite at gmail.com (Blair Bethwaite)
Date: Wed, 5 Oct 2016 17:20:59 +1100
Subject: [Openstack-operators] Rados Gateway to Swift migration
In-Reply-To: <1475647069.21186.42.camel@localhost>
References: <1475530276.4764.5.camel@localhost>
	<1475617534.4764.18.camel@localhost>
	<1475627284.4764.24.camel@localhost>
	<1475634567.4764.32.camel@localhost>
	<1475647069.21186.42.camel@localhost>
Message-ID:

Totally makes sense. I was mainly curious whether this was simply down to
features/middleware or if there were other factors. We've recently been
playing with swift-ceph-backend to try and integrate discrete Ceph-based
object stores seamlessly into a geo-distributed Swift cluster, so that
from a user perspective they get a consistent interface and the added
Swift middleware goodies. Not yet sure if it's a production-able solution
though, the project seems to have gone quiet.

On 5 October 2016 at 16:57, Xav Paice wrote:
> On Wed, 2016-10-05 at 15:42 +1100, Blair Bethwaite wrote:
>> Nice! But I'm curious, why the need to migrate?
>
> Hmm. I want to be diplomatic since both are great for their thing.
>
> For us, the main reason was simply that we wanted replication of the
> object storage between regions (we started the process before that was a
> feature in RGW), but also being a public cloud we also wanted to be able
> to bill customers for their usage, and we were finding that incredibly
> difficult with Rados Gateway in comparison to Swift.
>
> That, and we found customers were using RGW as a backup, and that's on
> the same storage back end as our Cinder and Glance - moving to a
> different platform makes it separate.
>
> There are a few other features in Swift that aren't in RGW, and we have
> customers asking for them, which really matters a lot to us.
>
> There are pros and cons for both, I don't regret us using RGW but it just
> doesn't suit our needs right now.
>

--
Cheers,
~Blairo

From ihrachys at redhat.com Wed Oct 5 07:50:56 2016
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Wed, 5 Oct 2016 09:50:56 +0200
Subject: [Openstack-operators] [mitaka] Nova, Neutron and dns_name
In-Reply-To:
References:
Message-ID: <3F0F1F20-038E-4E59-AA8F-20467F15D5ED@redhat.com>

Sergio Morales Acuña wrote:
> Hi.
>
> I'm facing a problem related to dns_name and dnsmasq.
>
> Nova and Neutron can create a port with dns_name, and dnsmasq is
> correctly updating "/addn_host".
>
> The problem is when Nova boots an instance using a new or previously
> created port. The port has the correct dns_name but dnsmasq (dhcp_agent)
> is using the generic (ex. host-10-0-0-16) names.
>
> If I restart dhcp_agent or do a port-update on the port, the correct name
> is added to addn_host.
>
> Any ideas?
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Probably a bug in Mitaka where the Neutron controller does not notify the
DHCP agent about an update. I think it should be solved in Newton with
https://review.openstack.org/#/c/355117/ and
https://review.openstack.org/#/c/375189/ plus other related patches that
belong to the push-notifications blueprint. Note that those patches are
not safe to backport back to Mitaka, so if we get a bug reported against
Mitaka for the behaviour you describe, we may need to solve it in some
other way.

Ihar

From zioproto at gmail.com Wed Oct 5 08:22:54 2016
From: zioproto at gmail.com (Saverio Proto)
Date: Wed, 5 Oct 2016 10:22:54 +0200
Subject: [Openstack-operators] [scientific][scientific-wg][Massively Distributed] - Workshop on Openstack-Federated Identity integration
In-Reply-To: <1737622709.22347130.1475617088541.JavaMail.root@zimbra29-e5.priv.proxad.net>
References: <1737622709.22347130.1475617088541.JavaMail.root@zimbra29-e5.priv.proxad.net>
Message-ID:

Hello Adrien,

yes I am aware of the Massively Distributed Working Group. I will be in
Barcelona, and thanks for giving me input about WGs that could be
interested in the output of this workshop. At the moment I thought of
filing a User Story for the Openstack Product Working group.

I hope to see your colleagues from Renater in Rome to start an early
discussion about this. Mario Reale in Cc: is the organizer of the
workshop, make sure he is in the right loop of emails.

thanks

Saverio

2016-10-04 23:38 GMT+02:00 :
> Hi Saverio,
>
> I'm wondering whether you are aware of the massively distributed Working Group [1].
> We are currently defining the agenda for our working sessions [2]. According to your email (in particular the use-case you raised), the topic is interesting and can be discussed during one of the scheduled sessions (if you are available obviously ;))
>
> Actually, Renater (the French NREN) is taking part to the Beyond the clouds initiative.
> This initiative targets the deployment of an OpenStack throughout several points of presence of a network operator (basically one pop = one rack) [3]
> Several members of this initiative are strongly involved in this new WG.
>
> I just sent an email to my colleagues from Renater to see whether they plan to attend the meeting in Rome.
> I would be pleased to get information and major results of your discussions.
>
> Will you attend the Barcelona summit? If yes, do you think you can come and present major results of your exchanges in one of the massively distributed WG sessions.
>
> Thanks,
> Adrien
>
> [1] https://wiki.openstack.org/wiki/Massively_Distributed_Clouds
> [2] https://etherpad.openstack.org/p/massively_distribute-barcelona_working_sessions
> [3] http://beyondtheclouds.github.io
>
> ----- Mail original -----
>> De: "Saverio Proto"
>> À: "stig openstack"
>> Cc: "OpenStack Operators"
>> Envoyé: Lundi 26 Septembre 2016 16:10:58
>> Objet: [Openstack-operators] [scientific][scientific-wg] - Workshop on Openstack-Federated Identity integration
>>
>> Hello operators,
>>
>> At GARR in Rome there will be an event shortly before Barcelona about
>> Openstack and Identity Federation.
>> >> https://eventr.geant.org/events/2527 >> >> This is a use case that is very important for NREN running public >> cloud for Universities, where a Identity Federation is already >> deployed for other services. >> >> At the meetup in Manchester when we talked about Federation we had an >> interesting session: >> https://etherpad.openstack.org/p/MAN-ops-Keystone-and-Federation >> >> I tagged the mail with scientific-wg because this looks like a >> use-case that is of big interest of the academic institutions. Look >> the etherpad of Manchester the section '' >> Who do you want to federate with?'' >> >> Thank you. >> >> Saverio >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> From lmihaiescu at gmail.com Wed Oct 5 11:19:55 2016 From: lmihaiescu at gmail.com (George Mihaiescu) Date: Wed, 5 Oct 2016 07:19:55 -0400 Subject: [Openstack-operators] Rados Gateway to Swift migration In-Reply-To: <1475647069.21186.42.camel@localhost> References: <1475530276.4764.5.camel@localhost> <1475617534.4764.18.camel@localhost> <1475627284.4764.24.camel@localhost> <1475634567.4764.32.camel@localhost> <1475647069.21186.42.camel@localhost> Message-ID: <7F2BF3CE-9988-4842-A9E4-3EEC7E71FC89@gmail.com> Hi Xav, We are trying to get usage metrics for radosgw as well for internal cost recovery. Can you please share how far you got in that process and what it was missing? Thank you, George > On Oct 5, 2016, at 1:57 AM, Xav Paice wrote: > >> On Wed, 2016-10-05 at 15:42 +1100, Blair Bethwaite wrote: >> Nice! But I'm curious, why the need to migrate? > > Hmm. I want to be diplomatic since both are great for their thing. > > For us, the main reason was simply that we wanted replication of the > object storage between regions (we started the process before that was a > feature in RGW), but also being a public cloud we also wanted to be able > to bill customers for their usage, and we were finding that incredibly > difficult with Rados Gateway in comparison to Swift. > > That, and we found customers were using RGW as a backup, and that's on > the same storage back end as our Cinder and Glance - moving to a > different platform makes it separate. > > There's a few other features in Swift that aren't in RGW, and we have > customers asking for them, which really matters a lot to us. > > There's pros and cons for both, I don't regret us using RGW but it just > doesn't suit our needs right now. > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From zioproto at gmail.com Wed Oct 5 12:11:19 2016 From: zioproto at gmail.com (Saverio Proto) Date: Wed, 5 Oct 2016 14:11:19 +0200 Subject: [Openstack-operators] Reserve an external network for 1 tenant In-Reply-To: References: Message-ID: > Alternatively, you could drop the 'external' attribute and attach your > instances directly to the provider network (no routers or private networks). I can't. Because in my network design I do not have all the compute nodes on a common L2 segment. I have a l3 fabric between the compute nodes. So I cant just bridge the provider network to the physical interface of any compute node. I need the traffic to get to the network node, and there I can access the provider network. 
For a complete L2 setup I am investigating the L2-gw plugin Saverio From tbechtold at suse.com Wed Oct 5 12:44:58 2016 From: tbechtold at suse.com (Thomas Bechtold) Date: Wed, 5 Oct 2016 14:44:58 +0200 Subject: [Openstack-operators] Newton packages for openSUSE and SLES available Message-ID: <20161005124458.2y7lh2ws4s4cgec2@basilikum> Hi, Newton packages for openSUSE and SLES are now available at: http://download.opensuse.org/repositories/Cloud:/OpenStack:/Newton/ We currently maintain + test the packages for SLE 12SP2 and openSUSE Leap 42.2. If you find issues, please do not hesitate to report them to opensuse-cloud at opensuse.org or to https://bugzilla.opensuse.org/ Thanks! Have a lot of fun, Tom From serverascode at gmail.com Wed Oct 5 12:51:55 2016 From: serverascode at gmail.com (Curtis) Date: Wed, 5 Oct 2016 06:51:55 -0600 Subject: [Openstack-operators] [telecom-nfv] Meeting #10 - May be slightly delayed Message-ID: Hi All, The telecom/nfv ops functional team has a meeting this morning [1]. I may be a bit late for it, depending on traffic, due to some unforeseen circumstances. I'm not canceling, just may be a few minutes late. That said, our agenda is currently light [2] with only one item today (so far), and that is that the OpenStack User Committee has provided us with a meeting time and room at the summit [3]. Further, the ops summit will likely also provide us a second slot, giving us two for the summit, which is great. The second slot is still being determined in terms of time and location. Thanks, Curtis. [1] http://eavesdrop.openstack.org/#OpenStack_Operators_Telco_and_NFV_Working_Group [2]: https://etherpad.openstack.org/p/ops-telco-nfv-meeting-agenda [3]: https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16768/openstack-operators-telecomnfv-functional-team From mkassawara at gmail.com Wed Oct 5 12:54:29 2016 From: mkassawara at gmail.com (Matt Kassawara) Date: Wed, 5 Oct 2016 06:54:29 -0600 Subject: [Openstack-operators] Reserve an external network for 1 tenant In-Reply-To: References: Message-ID: In that case, you probably need the RBAC features in Mitaka. On Wed, Oct 5, 2016 at 6:11 AM, Saverio Proto wrote: > > Alternatively, you could drop the 'external' attribute and attach your > > instances directly to the provider network (no routers or private > networks). > > I can't. Because in my network design I do not have all the compute > nodes on a common L2 segment. > I have a l3 fabric between the compute nodes. So I cant just bridge > the provider network to the physical interface of any compute node. > > I need the traffic to get to the network node, and there I can access > the provider network. > > For a complete L2 setup I am investigating the L2-gw plugin > > Saverio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josephbajin at gmail.com Wed Oct 5 16:00:17 2016 From: josephbajin at gmail.com (Joseph Bajin) Date: Wed, 5 Oct 2016 12:00:17 -0400 Subject: [Openstack-operators] Reserve an external network for 1 tenant In-Reply-To: References: Message-ID: I believe you can actually do this in Liberty.. http://docs.openstack.org/liberty/networking-guide/adv-config-network-rbac.html On Mon, Oct 3, 2016 at 1:00 AM, Kevin Benton wrote: > You will need mitaka to get an external network that is only available to > specific tenants. That is what the 'access_as_external' you identified does. 
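(As a concrete sketch of that Mitaka flow -- names and UUIDs below are
placeholders: create the external network unshared, then expose it to a
single project with an RBAC policy:

  neutron net-create ext-boxes --router:external \
      --provider:network_type flat --provider:physical_network physnet-boxes
  neutron rbac-create --target-tenant $PROJECT_ID \
      --action access_as_external --type network $NETWORK_ID

Only that project, plus the network's owner, can then pick ext-boxes as a
router gateway or floating IP pool.)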
> > Search for the section "Allowing a network to be used as an external > network" in http://docs.openstack.org/mitaka/networking-guide/ > config-rbac.html. > > On Thu, Sep 29, 2016 at 5:01 AM, Saverio Proto wrote: > >> Hello, >> >> Context: >> - openstack liberty >> - ubuntu trusty >> - neutron networking with vxlan tunnels >> >> we have been running Openstack with a single external network so far. >> >> Now we have a specific VLAN in our datacenter with some hardware boxes >> that need a connection to a specific tenant network. >> >> To make this possible I changed the configuration of the network node >> to support multiple external networks. I am able to create a router >> and set as external network the new physnet where the boxes are. >> >> Everything looks nice except that all the projects can benefit from >> this new external network. In any tenant I can create a router, and >> set the external network and connect to the boxes. I cannot restrict >> it to a specific tenant. >> >> I found this piece of documentation: >> >> https://wiki.openstack.org/wiki/Neutron/sharing-model-for- >> external-networks >> >> So it looks like it is impossible to have a flat external network >> reserved for 1 specific tenant. >> >> I also tried to follow this documentation: >> http://docs.openstack.org/liberty/networking-guide/adv-confi >> g-network-rbac.html >> >> But it does not specify if it is possible to specify a policy for an >> external network to limit the sharing. >> >> It did not work for me so I guess this does not work when the secret >> network I want to create is external. >> >> There is an action --action access_as_external that is not clear to me. >> >> Also look like this feature is evolving in Newton: >> http://docs.openstack.org/draft/networking-guide/config-rbac.html >> >> Anyone has tried similar setups ? What is the minimum openstack >> version to get this done ? >> >> thank you >> >> Saverio >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xavpaice at gmail.com Wed Oct 5 19:12:14 2016 From: xavpaice at gmail.com (Xav Paice) Date: Thu, 06 Oct 2016 08:12:14 +1300 Subject: [Openstack-operators] Rados Gateway to Swift migration In-Reply-To: <7F2BF3CE-9988-4842-A9E4-3EEC7E71FC89@gmail.com> References: <1475530276.4764.5.camel@localhost> <1475617534.4764.18.camel@localhost> <1475627284.4764.24.camel@localhost> <1475634567.4764.32.camel@localhost> <1475647069.21186.42.camel@localhost> <7F2BF3CE-9988-4842-A9E4-3EEC7E71FC89@gmail.com> Message-ID: <1475694734.21186.45.camel@localhost> On Wed, 2016-10-05 at 07:19 -0400, George Mihaiescu wrote: > Hi Xav, > > We are trying to get usage metrics for radosgw as well for internal cost recovery. Can you please share how far you got in that process and what it was missing? > To be honest, we didn't get very far and that's one of the reasons for using Swift instead. There were limitations with permissions that we found very difficult to get around, without having a data collection user added to each and every project. 
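(For anyone chasing the same numbers on RGW: the usual starting point is
its usage log -- a sketch, assuming the gateways run with
"rgw enable usage log = true" in ceph.conf and an admin-capable user:

  radosgw-admin usage show --uid=<rgw-user> \
      --start-date=2016-09-01 --end-date=2016-10-01

That reports per-bucket ops and bytes sent/received, but mapping RGW
users back to Keystone projects and iterating over all of them is exactly
the per-project legwork described above.)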
> Thank you, > George > > > On Oct 5, 2016, at 1:57 AM, Xav Paice wrote: > > > >> On Wed, 2016-10-05 at 15:42 +1100, Blair Bethwaite wrote: > >> Nice! But I'm curious, why the need to migrate? > > > > Hmm. I want to be diplomatic since both are great for their thing. > > > > For us, the main reason was simply that we wanted replication of the > > object storage between regions (we started the process before that was a > > feature in RGW), but also being a public cloud we also wanted to be able > > to bill customers for their usage, and we were finding that incredibly > > difficult with Rados Gateway in comparison to Swift. > > > > That, and we found customers were using RGW as a backup, and that's on > > the same storage back end as our Cinder and Glance - moving to a > > different platform makes it separate. > > > > There's a few other features in Swift that aren't in RGW, and we have > > customers asking for them, which really matters a lot to us. > > > > There's pros and cons for both, I don't regret us using RGW but it just > > doesn't suit our needs right now. > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From Kevin.Fox at pnnl.gov Wed Oct 5 19:29:35 2016 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 5 Oct 2016 19:29:35 +0000 Subject: [Openstack-operators] Rados Gateway to Swift migration In-Reply-To: <1475694734.21186.45.camel@localhost> References: <1475530276.4764.5.camel@localhost> <1475617534.4764.18.camel@localhost> <1475627284.4764.24.camel@localhost> <1475634567.4764.32.camel@localhost> <1475647069.21186.42.camel@localhost> <7F2BF3CE-9988-4842-A9E4-3EEC7E71FC89@gmail.com>, <1475694734.21186.45.camel@localhost> Message-ID: <1A3C52DFCD06494D8528644858247BF01BDF61D4@EX10MBOX06.pnnl.gov> Did you try it with jewel? If not, what version? Thanks, Kevin ________________________________________ From: Xav Paice [xavpaice at gmail.com] Sent: Wednesday, October 05, 2016 12:12 PM To: George Mihaiescu Cc: OpenStack Operators Subject: Re: [Openstack-operators] Rados Gateway to Swift migration On Wed, 2016-10-05 at 07:19 -0400, George Mihaiescu wrote: > Hi Xav, > > We are trying to get usage metrics for radosgw as well for internal cost recovery. Can you please share how far you got in that process and what it was missing? > To be honest, we didn't get very far and that's one of the reasons for using Swift instead. There were limitations with permissions that we found very difficult to get around, without having a data collection user added to each and every project. > Thank you, > George > > > On Oct 5, 2016, at 1:57 AM, Xav Paice wrote: > > > >> On Wed, 2016-10-05 at 15:42 +1100, Blair Bethwaite wrote: > >> Nice! But I'm curious, why the need to migrate? > > > > Hmm. I want to be diplomatic since both are great for their thing. > > > > For us, the main reason was simply that we wanted replication of the > > object storage between regions (we started the process before that was a > > feature in RGW), but also being a public cloud we also wanted to be able > > to bill customers for their usage, and we were finding that incredibly > > difficult with Rados Gateway in comparison to Swift. > > > > That, and we found customers were using RGW as a backup, and that's on > > the same storage back end as our Cinder and Glance - moving to a > > different platform makes it separate. 
> > > > There's a few other features in Swift that aren't in RGW, and we have > > customers asking for them, which really matters a lot to us. > > > > There's pros and cons for both, I don't regret us using RGW but it just > > doesn't suit our needs right now. > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From xavpaice at gmail.com Wed Oct 5 19:39:12 2016 From: xavpaice at gmail.com (Xav Paice) Date: Thu, 06 Oct 2016 08:39:12 +1300 Subject: [Openstack-operators] Rados Gateway to Swift migration In-Reply-To: <1A3C52DFCD06494D8528644858247BF01BDF61D4@EX10MBOX06.pnnl.gov> References: <1475530276.4764.5.camel@localhost> <1475617534.4764.18.camel@localhost> <1475627284.4764.24.camel@localhost> <1475634567.4764.32.camel@localhost> <1475647069.21186.42.camel@localhost> <7F2BF3CE-9988-4842-A9E4-3EEC7E71FC89@gmail.com> ,<1475694734.21186.45.camel@localhost> <1A3C52DFCD06494D8528644858247BF01BDF61D4@EX10MBOX06.pnnl.gov> Message-ID: <1475696352.21186.48.camel@localhost> On Wed, 2016-10-05 at 19:29 +0000, Fox, Kevin M wrote: > Did you try it with jewel? If not, what version? > >From Emperor up to Hammer, haven't upgraded since then. I see that there's a number of significant changes, some of the limitations we were finding to be a problem may be gone now. > Thanks, > Kevin > ________________________________________ > From: Xav Paice [xavpaice at gmail.com] > Sent: Wednesday, October 05, 2016 12:12 PM > To: George Mihaiescu > Cc: OpenStack Operators > Subject: Re: [Openstack-operators] Rados Gateway to Swift migration > > On Wed, 2016-10-05 at 07:19 -0400, George Mihaiescu wrote: > > Hi Xav, > > > > We are trying to get usage metrics for radosgw as well for internal cost recovery. Can you please share how far you got in that process and what it was missing? > > > > To be honest, we didn't get very far and that's one of the reasons for > using Swift instead. There were limitations with permissions that we > found very difficult to get around, without having a data collection > user added to each and every project. > > > Thank you, > > George > > > > > On Oct 5, 2016, at 1:57 AM, Xav Paice wrote: > > > > > >> On Wed, 2016-10-05 at 15:42 +1100, Blair Bethwaite wrote: > > >> Nice! But I'm curious, why the need to migrate? > > > > > > Hmm. I want to be diplomatic since both are great for their thing. > > > > > > For us, the main reason was simply that we wanted replication of the > > > object storage between regions (we started the process before that was a > > > feature in RGW), but also being a public cloud we also wanted to be able > > > to bill customers for their usage, and we were finding that incredibly > > > difficult with Rados Gateway in comparison to Swift. > > > > > > That, and we found customers were using RGW as a backup, and that's on > > > the same storage back end as our Cinder and Glance - moving to a > > > different platform makes it separate. > > > > > > There's a few other features in Swift that aren't in RGW, and we have > > > customers asking for them, which really matters a lot to us. > > > > > > There's pros and cons for both, I don't regret us using RGW but it just > > > doesn't suit our needs right now. 
> > > > > > > > > _______________________________________________ > > > OpenStack-operators mailing list > > > OpenStack-operators at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From Kevin.Fox at pnnl.gov Wed Oct 5 19:47:01 2016 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 5 Oct 2016 19:47:01 +0000 Subject: [Openstack-operators] Rados Gateway to Swift migration In-Reply-To: <1475696352.21186.48.camel@localhost> References: <1475530276.4764.5.camel@localhost> <1475617534.4764.18.camel@localhost> <1475627284.4764.24.camel@localhost> <1475634567.4764.32.camel@localhost> <1475647069.21186.42.camel@localhost> <7F2BF3CE-9988-4842-A9E4-3EEC7E71FC89@gmail.com> ,<1475694734.21186.45.camel@localhost> <1A3C52DFCD06494D8528644858247BF01BDF61D4@EX10MBOX06.pnnl.gov>, <1475696352.21186.48.camel@localhost> Message-ID: <1A3C52DFCD06494D8528644858247BF01BDF620A@EX10MBOX06.pnnl.gov> ah. ok. we were going to start looking into getting some metrics too, but hadn't yet. We're infernalis now, and going to jewel soon. Theres a huge difference between jewel and hammer. So maybe its better now... The whole single namespace for all tenants thing is supposed to be fixed in jewel too. which has bitten us on multiple occasions. Thanks, Kevin ________________________________________ From: Xav Paice [xavpaice at gmail.com] Sent: Wednesday, October 05, 2016 12:39 PM To: Fox, Kevin M Cc: George Mihaiescu; OpenStack Operators Subject: Re: [Openstack-operators] Rados Gateway to Swift migration On Wed, 2016-10-05 at 19:29 +0000, Fox, Kevin M wrote: > Did you try it with jewel? If not, what version? > >From Emperor up to Hammer, haven't upgraded since then. I see that there's a number of significant changes, some of the limitations we were finding to be a problem may be gone now. > Thanks, > Kevin > ________________________________________ > From: Xav Paice [xavpaice at gmail.com] > Sent: Wednesday, October 05, 2016 12:12 PM > To: George Mihaiescu > Cc: OpenStack Operators > Subject: Re: [Openstack-operators] Rados Gateway to Swift migration > > On Wed, 2016-10-05 at 07:19 -0400, George Mihaiescu wrote: > > Hi Xav, > > > > We are trying to get usage metrics for radosgw as well for internal cost recovery. Can you please share how far you got in that process and what it was missing? > > > > To be honest, we didn't get very far and that's one of the reasons for > using Swift instead. There were limitations with permissions that we > found very difficult to get around, without having a data collection > user added to each and every project. > > > Thank you, > > George > > > > > On Oct 5, 2016, at 1:57 AM, Xav Paice wrote: > > > > > >> On Wed, 2016-10-05 at 15:42 +1100, Blair Bethwaite wrote: > > >> Nice! But I'm curious, why the need to migrate? > > > > > > Hmm. I want to be diplomatic since both are great for their thing. > > > > > > For us, the main reason was simply that we wanted replication of the > > > object storage between regions (we started the process before that was a > > > feature in RGW), but also being a public cloud we also wanted to be able > > > to bill customers for their usage, and we were finding that incredibly > > > difficult with Rados Gateway in comparison to Swift. 
> > >
> > > That, and we found customers were using RGW as a backup, and that's on
> > > the same storage back end as our Cinder and Glance - moving to a
> > > different platform makes it separate.
> > >
> > > There's a few other features in Swift that aren't in RGW, and we have
> > > customers asking for them, which really matters a lot to us.
> > >
> > > There's pros and cons for both, I don't regret us using RGW but it just
> > > doesn't suit our needs right now.
> > >
> > >
> > > _______________________________________________
> > > OpenStack-operators mailing list
> > > OpenStack-operators at lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From Kevin.Fox at pnnl.gov Wed Oct 5 19:47:01 2016
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 5 Oct 2016 19:47:01 +0000
Subject: [Openstack-operators] Rados Gateway to Swift migration
In-Reply-To: <1475696352.21186.48.camel@localhost>
References: <1475530276.4764.5.camel@localhost> <1475617534.4764.18.camel@localhost> <1475627284.4764.24.camel@localhost> <1475634567.4764.32.camel@localhost> <1475647069.21186.42.camel@localhost> <7F2BF3CE-9988-4842-A9E4-3EEC7E71FC89@gmail.com> ,<1475694734.21186.45.camel@localhost> <1A3C52DFCD06494D8528644858247BF01BDF61D4@EX10MBOX06.pnnl.gov>, <1475696352.21186.48.camel@localhost>
Message-ID: <1A3C52DFCD06494D8528644858247BF01BDF620A@EX10MBOX06.pnnl.gov>

ah. ok. we were going to start looking into getting some metrics too, but
hadn't yet. We're infernalis now, and going to jewel soon. There's a huge
difference between jewel and hammer. So maybe it's better now...

The whole single namespace for all tenants thing is supposed to be fixed
in jewel too, which has bitten us on multiple occasions.

Thanks,
Kevin
________________________________________
From: Xav Paice [xavpaice at gmail.com]
Sent: Wednesday, October 05, 2016 12:39 PM
To: Fox, Kevin M
Cc: George Mihaiescu; OpenStack Operators
Subject: Re: [Openstack-operators] Rados Gateway to Swift migration

On Wed, 2016-10-05 at 19:29 +0000, Fox, Kevin M wrote:
> Did you try it with jewel? If not, what version?
>
From Emperor up to Hammer, haven't upgraded since then. I see that
there's a number of significant changes, some of the limitations we were
finding to be a problem may be gone now.

> Thanks,
> Kevin
> ________________________________________
> From: Xav Paice [xavpaice at gmail.com]
> Sent: Wednesday, October 05, 2016 12:12 PM
> To: George Mihaiescu
> Cc: OpenStack Operators
> Subject: Re: [Openstack-operators] Rados Gateway to Swift migration
>
> On Wed, 2016-10-05 at 07:19 -0400, George Mihaiescu wrote:
> > Hi Xav,
> >
> > We are trying to get usage metrics for radosgw as well for internal cost recovery. Can you please share how far you got in that process and what it was missing?
> >
> To be honest, we didn't get very far and that's one of the reasons for
> using Swift instead. There were limitations with permissions that we
> found very difficult to get around, without having a data collection
> user added to each and every project.
>
> > Thank you,
> > George
> >
> > > On Oct 5, 2016, at 1:57 AM, Xav Paice wrote:
> > >
> > >> On Wed, 2016-10-05 at 15:42 +1100, Blair Bethwaite wrote:
> > >> Nice! But I'm curious, why the need to migrate?
> > >
> > > Hmm. I want to be diplomatic since both are great for their thing.
> > >
> > > For us, the main reason was simply that we wanted replication of the
> > > object storage between regions (we started the process before that was a
> > > feature in RGW), but also being a public cloud we also wanted to be able
> > > to bill customers for their usage, and we were finding that incredibly
> > > difficult with Rados Gateway in comparison to Swift.
> > >
> > > That, and we found customers were using RGW as a backup, and that's on
> > > the same storage back end as our Cinder and Glance - moving to a
> > > different platform makes it separate.
> > >
> > > There's a few other features in Swift that aren't in RGW, and we have
> > > customers asking for them, which really matters a lot to us.
> > >
> > > There's pros and cons for both, I don't regret us using RGW but it just
> > > doesn't suit our needs right now.
> > >
> > >
> > > _______________________________________________
> > > OpenStack-operators mailing list
> > > OpenStack-operators at lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From mihailmed at gmail.com Wed Oct 5 20:11:38 2016
From: mihailmed at gmail.com (Mikhail Medvedev)
Date: Thu, 6 Oct 2016 01:11:38 +0500
Subject: [Openstack-operators] [Nova][Scheduler] How to filter and pick Host with least number of instances.
In-Reply-To:
References:
Message-ID:

Hi Karan,

On Sep 22, 2016 19:19, "Karan" wrote:
>
> Hi
>
> Is it possible to configure openstack scheduler to schedule instances
> to a host with least number of instances running on it?
> When multiple hosts are eligible to spawn a new instance, scheduler
> applies weight multipliers to available RAM and CPU and pick one host.
> Is there a way to ask scheduler to pick a Host with least number of
> instances on it.
>

Yes, there is a way to select a host with the least number of instances. It
can be done by writing a custom weighter that returns negated number of
instances as host weight. I wrote an implementation that has been used for a
while in our test cloud, but I am not going to be able to share it until
next week. Let me know if you still need it by then.

>
> Thanks
> Karan
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Mikhail Medvedev (mmedvede)
IBM, OpenStack CI for KVM on Power

From julien at danjou.info Wed Oct 5 20:43:10 2016
From: julien at danjou.info (Julien Danjou)
Date: Wed, 05 Oct 2016 22:43:10 +0200
Subject: [Openstack-operators] [openstack-dev] [telemetry] Deprecating the Ceilometer API
Message-ID:

There's a thread running about deprecating the Ceilometer API that might
be of some interest to operators, see:

http://lists.openstack.org/pipermail/openstack-dev/2016-October/105042.html

--
Julien Danjou
# Free Software hacker
# https://julien.danjou.info
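(Picking up Mikhail's weigher suggestion from a few messages up -- a
minimal sketch against nova's weigher interface; the class name and
wiring below are illustrative, not his actual implementation:

  from nova.scheduler import weights

  class FewestInstancesWeigher(weights.BaseHostWeigher):
      """Prefer the host that currently runs the fewest instances."""

      def _weigh_object(self, host_state, weight_properties):
          # The scheduler picks the host with the highest total weight,
          # so negate the instance count to favour emptier hosts.
          return -host_state.num_instances

To try something like this, list the class in scheduler_weight_classes in
nova.conf on the scheduler node.)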
From dgvigi at sandia.gov Wed Oct 5 21:16:31 2016
From: dgvigi at sandia.gov (Vigil, David Gabriel)
Date: Wed, 5 Oct 2016 21:16:31 +0000
Subject: [Openstack-operators] Tenant/Project naming restrictions
Message-ID: <8ff8f0dcbe1049189b30acb4c525ca5a@ES01AMSNLNT.srn.sandia.gov>

What, if any, are the official tenant/project naming requirements/restrictions? I can't find any documentation that speaks to any limitations. Is this documented somewhere?

Dave G Vigil Sr
Systems Integration Analyst Sr/SAIC Lead 09321
Common Engineering Environment
dgvigi at sandia.gov
505-284-0157 (office)
SAIC

From semoac at gmail.com Wed Oct 5 21:28:38 2016
From: semoac at gmail.com (=?UTF-8?Q?Sergio_Morales_Acu=C3=B1a?=)
Date: Wed, 05 Oct 2016 21:28:38 +0000
Subject: [Openstack-operators] [mitaka] Nova, Neutron and dns_name
In-Reply-To: <3F0F1F20-038E-4E59-AA8F-20467F15D5ED@redhat.com>
References: <3F0F1F20-038E-4E59-AA8F-20467F15D5ED@redhat.com>
Message-ID:

Thanks for the heads up.

On Wed, Oct 5, 2016 at 4:50 AM, Ihar Hrachyshka () wrote:

> Sergio Morales Acuña wrote:
>
> > Hi.
> >
> > I'm facing a problem related to dns_name and dnsmasq.
> >
> > Nova and Neutron can create a port with dns_name and dnsmasq it's
> > correctly updating "/addn_host".
> >
> > The problem is when Nova boots an instance using a new or previously
> > created port. The port has the correct dns_name but dnsmasq (dhcp_agent)
> > it's using the generic (ex. host-10-0-0-16) names.
> >
> > If I restart dhcp_agent or do a port-update on the port, the correct name
> > is added to addn_host.
> >
> > Any ideas?
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> Probably a bug in Mitaka where Neutron controller does not notify DHCP
> agent about an update. I think it should be solved in Newton with
> https://review.openstack.org/#/c/355117/ and
> https://review.openstack.org/#/c/375189/ plus other related patches that
> belong to push-notifications blueprint.
>
> Note that those patches are not safe to backport back to Mitaka, so if we
> get a bug reported against Mitaka for the behaviour you describe, we may
> need to solve it in some other way.
>
> Ihar

From s.martinelli at gmail.com Wed Oct 5 21:36:11 2016
From: s.martinelli at gmail.com (Steve Martinelli)
Date: Wed, 5 Oct 2016 17:36:11 -0400
Subject: [Openstack-operators] Tenant/Project naming restrictions
In-Reply-To: <8ff8f0dcbe1049189b30acb4c525ca5a@ES01AMSNLNT.srn.sandia.gov>
References: <8ff8f0dcbe1049189b30acb4c525ca5a@ES01AMSNLNT.srn.sandia.gov>
Message-ID:

There are some restrictions.

1. The project name cannot be longer than 64 characters.
2. Within a domain, the project name is unique. So you can have project
"foo" in the "default" domain, and in any other domain.

On Wed, Oct 5, 2016 at 5:16 PM, Vigil, David Gabriel
wrote:

> What, if any, are the official tenant/project naming
> requirements/restrictions? I can't find any documentation that speaks to
> any limitations. Is this documented somewhere?
>
>
>
> Dave G Vigil Sr
>
> Systems Integration Analyst Sr/SAIC Lead 09321
>
> Common Engineering Environment
>
> dgvigi at sandia.gov
>
> 505-284-0157 (office)
>
> SAIC
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

From matt at nycresistor.com Wed Oct 5 21:41:18 2016
From: matt at nycresistor.com (Silence Dogood)
Date: Wed, 5 Oct 2016 17:41:18 -0400
Subject: [Openstack-operators] Tenant/Project naming restrictions
In-Reply-To:
References: <8ff8f0dcbe1049189b30acb4c525ca5a@ES01AMSNLNT.srn.sandia.gov>
Message-ID:

so project '? ' would be perfectly okay then.

On Wed, Oct 5, 2016 at 5:36 PM, Steve Martinelli wrote:

> There are some restrictions.
>
> 1. The project name cannot be longer than 64 characters.
> 2. Within a domain, the project name is unique. So you can have project
> "foo" in the "default" domain, and in any other domain.
>
> On Wed, Oct 5, 2016 at 5:16 PM, Vigil, David Gabriel
> wrote:
>
>> What, if any, are the official tenant/project naming
>> requirements/restrictions? I can't find any documentation that speaks to
>> any limitations. Is this documented somewhere?
>>
>> Dave G Vigil Sr
>>
>> Systems Integration Analyst Sr/SAIC Lead 09321
>>
>> Common Engineering Environment
>>
>> dgvigi at sandia.gov
>>
>> 505-284-0157 (office)
>>
>> SAIC
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

From kevin at benton.pub Wed Oct 5 22:05:24 2016
From: kevin at benton.pub (Kevin Benton)
Date: Wed, 5 Oct 2016 15:05:24 -0700
Subject: [Openstack-operators] [mitaka] Nova, Neutron and dns_name
In-Reply-To: <3F0F1F20-038E-4E59-AA8F-20467F15D5ED@redhat.com>
References: <3F0F1F20-038E-4E59-AA8F-20467F15D5ED@redhat.com>
Message-ID:

What's strange is that it's happening with a previously created port,
which the dhcp agent should already know about even without an update.

@Sergio,

Can you open a bug for this and provide exact steps to reproduce this?

Thanks,
Kevin Benton

On Wed, Oct 5, 2016 at 12:50 AM, Ihar Hrachyshka wrote:

> Sergio Morales Acuña wrote:
>
> Hi.
>>
>> I'm facing a problem related to dns_name and dnsmasq.
>>
>> Nova and Neutron can create a port with dns_name and dnsmasq it's
>> correctly updating "/addn_host".
>>
>> The problem is when Nova boots an instance using a new or previously
>> created port. The port has the correct dns_name but dnsmasq (dhcp_agent)
>> it's using the generic (ex. host-10-0-0-16) names.
>>
>> If I restart dhcp_agent or do a port-update on the port, the correct name
>> is added to addn_host.
>>
>> Any ideas?
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
> Probably a bug in Mitaka where Neutron controller does not notify DHCP
> agent about an update.
> I think it should be solved in Newton with
> https://review.openstack.org/#/c/355117/ and
> https://review.openstack.org/#/c/375189/ plus other related patches that
> belong to push-notifications blueprint.
>
> Note that those patches are not safe to backport back to Mitaka, so if we
> get a bug reported against Mitaka for the behaviour you describe, we may
> need to solve it in some other way.
>
> Ihar
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From semoac at gmail.com Wed Oct 5 22:41:01 2016
From: semoac at gmail.com (=?UTF-8?Q?Sergio_Morales_Acu=C3=B1a?=)
Date: Wed, 05 Oct 2016 22:41:01 +0000
Subject: [Openstack-operators] [mitaka] Nova, Neutron and dns_name
In-Reply-To:
References: <3F0F1F20-038E-4E59-AA8F-20467F15D5ED@redhat.com>
Message-ID:

@Kevin, here is the bug and the extra information:

https://bugs.launchpad.net/neutron/+bug/1630603/

Thanks,

On Wed, Oct 5, 2016 at 7:05 PM, Kevin Benton () wrote:

> What's strange is that it's happening with a previously created port,
> which the dhcp agent should already know about even without an update.
>
> @Sergio,
>
> Can you open a bug for this and provide exact steps to reproduce this?
>
>
> Thanks,
> Kevin Benton
>
> On Wed, Oct 5, 2016 at 12:50 AM, Ihar Hrachyshka
> wrote:
>
> Sergio Morales Acuña wrote:
>
> Hi.
>
> I'm facing a problem related to dns_name and dnsmasq.
>
> Nova and Neutron can create a port with dns_name and dnsmasq it's
> correctly updating "/addn_host".
>
> The problem is when Nova boots an instance using a new or previously created
> port. The port has the correct dns_name but dnsmasq (dhcp_agent) it's using
> the generic (ex. host-10-0-0-16) names.
>
> If I restart dhcp_agent or do a port-update on the port, the correct name
> is added to addn_host.
>
> Any ideas?
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> Probably a bug in Mitaka where Neutron controller does not notify DHCP
> agent about an update. I think it should be solved in Newton with
> https://review.openstack.org/#/c/355117/ and
> https://review.openstack.org/#/c/375189/ plus other related patches that
> belong to push-notifications blueprint.
>
> Note that those patches are not safe to backport back to Mitaka, so if we
> get a bug reported against Mitaka for the behaviour you describe, we may
> need to solve it in some other way.
>
> Ihar
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

From blair.bethwaite at gmail.com Wed Oct 5 23:00:07 2016
From: blair.bethwaite at gmail.com (Blair Bethwaite)
Date: Thu, 6 Oct 2016 10:00:07 +1100
Subject: [Openstack-operators] Barcelona Ops Meetup Planning
In-Reply-To:
References: <18C1B8AC-8838-4B68-901C-87CA4F4F521B@rackspace.com>
Message-ID:

Hi David,

Good point, hadn't noticed the docs ones. I'll keep an eye on how the
schedule pans out versus where I can be and slot something into relevant
etherpads closer to the time.
Cheers, Blair On 5 October 2016 at 22:23, David Medberry wrote: > There is a docs session or two. Also I think it could be discussed in the > Nova section. As a stretch, we could cover on lightning talks. There is also > the Friday work sessions. So I think plenty of options. We also have at > least three placeholder sessions. > > > On Oct 4, 2016 11:28 PM, "Blair Bethwaite" > wrote: >> >> Hi all, >> >> I've just had a look at this with a view to adding 10 minutes >> somewhere on what to do with the hypervisor tuning guide, but I see >> the free-form notes on the etherpad have been marked as "Old", so >> figure it's better to discuss here first... Could maybe fit under >> ops-nova or ops-hardware? >> >> Cheers, >> Blair >> >> On 23 September 2016 at 14:53, Tom Fifield wrote: >> > Bump! Please add your +1s or session suggestions. We're also looking for >> > lightning talks. >> > >> > https://etherpad.openstack.org/p/BCN-ops-meetup >> > >> > >> > On 20/09/16 21:50, Matt Van Winkle wrote: >> >> >> >> Hello all, >> >> >> >> The etherpad to gather ideas from you all for sessions during the >> >> Barcelona Ops Meetup. Please take some time to review and add your >> >> comments here: >> >> >> >> >> >> >> >> https://etherpad.openstack.org/p/BCN-ops-meetup >> >> >> >> >> >> >> >> >> >> >> >> The meetup planning team will be meeting weekly between now and the >> >> summit to plan all this. If You are interested in helping with that >> >> process, you are welcome to join us in #openstack-operators at 14:00 >> >> UTC >> >> on Tuesdays. We are always ready to welcome new help. >> >> >> >> >> >> >> >> Thanks! >> >> >> >> VW >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> >> OpenStack-operators mailing list >> >> OpenStack-operators at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> >> > >> > >> > _______________________________________________ >> > OpenStack-operators mailing list >> > OpenStack-operators at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> >> >> -- >> Cheers, >> ~Blairo >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Cheers, ~Blairo From zioproto at gmail.com Thu Oct 6 07:20:45 2016 From: zioproto at gmail.com (Saverio Proto) Date: Thu, 6 Oct 2016 09:20:45 +0200 Subject: [Openstack-operators] Tenant/Project naming restrictions In-Reply-To: References: <8ff8f0dcbe1049189b30acb4c525ca5a@ES01AMSNLNT.srn.sandia.gov> Message-ID: Is the '@' character allowed in the tenant/project names ? Saverio 2016-10-05 23:36 GMT+02:00 Steve Martinelli : > There are some restrictions. > > 1. The project name cannot be longer than 64 characters. > 2. Within a domain, the project name is unique. So you can have project > "foo" in the "default" domain, and in any other domain. > > On Wed, Oct 5, 2016 at 5:16 PM, Vigil, David Gabriel > wrote: >> >> What, if any, are the official tenant/project naming >> requirements/restrictions? I can?t find any documentation that speaks to any >> limitations. Is this documented somewhere? 
>> >> >> >> >> >> >> >> Dave G Vigil Sr >> >> Systems Integration Analyst Sr/SAIC Lead 09321 >> >> Common Engineering Environment >> >> dgvigi at sandia.gov >> >> 505-284-0157 (office) >> >> SAIC >> >> >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From blair.bethwaite at gmail.com Thu Oct 6 09:00:26 2016 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Thu, 6 Oct 2016 20:00:26 +1100 Subject: [Openstack-operators] [openstack-hpc] What's the state of openstack-hpc now? In-Reply-To: References: Message-ID: Hi Andrew, Just wanted to quickly say that I really appreciate your prompt reply and hope you'll be happy to assist further if possible. I've just gotten slightly sidetracked by some other issues but will come back to this in the next week and provide more background info and results of workaround attempts. Cheers, Blair On 28 Sep 2016 2:13 AM, "Andrew J Younge" wrote: > Hi Blair, > > I'm very interested to hear more about your project using virtualzed > GPUs, and hopefully JP and/or myself can be of help here. > > So in the past we've struggled with the usage of PCI bridges as a > connector between multiple GPUs. This was first seen with Xen and > S2070 servers (which has 4 older GPUs across Nvidia PCI bridges) and > found that the ACS was prohibiting the successful passthrough of the > GPU. While we just decided to use discrete independent adapters moving > forward, we've never gone back and tried this with KVM. With that, I > can expect the same issues as the ACS cannot guarantee proper > isolation of the device. Looking at the K80 GPUs, I'm seeing that > there are 3 PLX bridges for each GPU pair (see my output below for a > native system w/out KVM), and I'd estimate likely these would be on > the same iommu group. This could be the problem. > > I have heard that such a patch exists in KVM for you to override the > IOMMU groups and ACS protections, however I don't have any experience > with it directly [1]. In our experiments, we used an updated SeaBIOS, > whereas the link provided below details a UEFI BIOS. This may have > different implications that I don't have experience with. > Furthermore, I assume this patch will likely just be ignoring all of > ACS, which is going to be an obvious and potentially severe security > risk. In a purely academic environment such a security risk may not > matter, but it should be noted nonetheless. > > So, lets take a few steps back to confirm things. Are you able to > actually pass both K80 GPUs through to a running KVM instance, and > have the Nvidia drivers loaded? Any dmesg output errors here may go a > long way. Are you also passing through the PCI bridge device (lspci > should show one)? If you're actually making it that far, it may next > be worth simply running a regular CUDA application set first before > trying any GPUDirect methods. For our GPUDirect usage, we were > specifically leveraging the RDMA support with an InfiniBand adapter > rather than CUDA P2P, so your mileage may vary there as well. > > Hopefully this is helpful in finding your problem. 
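(One caveat on the ACS override: it is an out-of-tree kernel patch, so it
only applies if the host kernel actually carries it -- e.g. the patched
kernels some distributions ship for VFIO work. Where present, it is
enabled from the host kernel command line, roughly:

  pcie_acs_override=downstream,multifunction

which makes the kernel treat the downstream ports as isolating, with the
security caveats already noted above.)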
With this, I'd be > interested to hear if the ACS override mechanism, or any other option > works for enabling passthrough with K80 GPUs (we have a few dozen > non-virtualized for another project). If you have any other > non-bridged GPU cards (like a K20 or C2075) lying around, it may be > worth giving that a try to try to rule-out other potential issues > first. > > [1] https://wiki.archlinux.org/index.php/PCI_passthrough_via_ > OVMF#Bypassing_the_IOMMU_groups_.28ACS_override_patch.29 > > [root at r-001 ~]# lspci | grep -i -e PLX -e nvidia > 02:00.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI > Express Gen 3 (8.0 GT/s) Switch (rev ca) > 03:08.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI > Express Gen 3 (8.0 GT/s) Switch (rev ca) > 03:10.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI > Express Gen 3 (8.0 GT/s) Switch (rev ca) > 04:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1) > 05:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1) > 06:00.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI > Express Gen 3 (8.0 GT/s) Switch (rev ca) > 07:08.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI > Express Gen 3 (8.0 GT/s) Switch (rev ca) > 07:10.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI > Express Gen 3 (8.0 GT/s) Switch (rev ca) > 08:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1) > 09:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1) > 82:00.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI > Express Gen 3 (8.0 GT/s) Switch (rev ca) > 83:08.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI > Express Gen 3 (8.0 GT/s) Switch (rev ca) > 83:10.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI > Express Gen 3 (8.0 GT/s) Switch (rev ca) > 84:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1) > 85:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1) > 86:00.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI > Express Gen 3 (8.0 GT/s) Switch (rev ca) > 87:08.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI > Express Gen 3 (8.0 GT/s) Switch (rev ca) > 87:10.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI > Express Gen 3 (8.0 GT/s) Switch (rev ca) > 88:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1) > 89:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1) > [root at r-001 ~]# nvidia-smi topo --matrix > GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 mlx4_0 CPU Affinity > GPU0 X PIX PHB PHB SOC SOC SOC SOC SOC 0-11,24-35 > GPU1 PIX X PHB PHB SOC SOC SOC SOC SOC 0-11,24-35 > GPU2 PHB PHB X PIX SOC SOC SOC SOC SOC 0-11,24-35 > GPU3 PHB PHB PIX X SOC SOC SOC SOC SOC 0-11,24-35 > GPU4 SOC SOC SOC SOC X PIX PHB PHB PHB 12-23,36-47 > GPU5 SOC SOC SOC SOC PIX X PHB PHB PHB 12-23,36-47 > GPU6 SOC SOC SOC SOC PHB PHB X PIX PHB 12-23,36-47 > GPU7 SOC SOC SOC SOC PHB PHB PIX X PHB 12-23,36-47 > mlx4_0 SOC SOC SOC SOC PHB PHB PHB PHB X > > Legend: > > X = Self > SOC = Path traverses a socket-level link (e.g. QPI) > PHB = Path traverses a PCIe host bridge > PXB = Path traverses multiple PCIe internal switches > PIX = Path traverses a PCIe internal switch > > > Cheers, > Andrew > > > Andrew J. 
Younge > School of Informatics & Computing > Indiana University / Bloomington, IN USA > ajyounge at indiana.edu / http://ajyounge.com > > > On Tue, Sep 27, 2016 at 4:37 AM, Blair Bethwaite > wrote: > > Hi Andrew, hi John - > > > > I've just started trying to get CUDA P2P working in our virtualized > > HPC environment. I figure this must be something you solved already in > > order to produce the aforementioned paper, but having read it a couple > > of times I don't think it provides enough detail about the guest > > config, hoping you can shed some light... > > > > The issue I'm grappling with is that despite using a qemu-kvm machine > > type (q35) with an emulated PCIe bus and seeing that indeed the P2P > > capable GPUs (NVIDIA K80s) are attached to that bus, and nvidia-smi > > sees them as sharing a PHB, the simpleP2P CUDA sample fails when > > checking their ability to communicate with each other. Is there some > > magic config I might be missing, did you need to make any PCI-ACS > > changes? > > > > Best regards, > > Blair > > > > > > On 16 March 2016 at 07:57, Blair Bethwaite > wrote: > >> > >> Hi Andrew, > >> > >> On 16 March 2016 at 05:28, Andrew J Younge > wrote: > >> > point to a recent publication of ours at VEE15 titled "Supporting High > >> > Performance Molecular Dynamics in Virtualized Clusters using IOMMU, > >> > SR-IOV, and GPUDirect." In the paper we show that using Nvidia GPUs > >> ... > >> > http://dl.acm.org/citation.cfm?id=2731194 > >> > >> Oooh interesting - GPUDirect too. That's something I've been wanting > >> to try out in our environment. Will take a look a your paper... > >> > >> -- > >> Cheers, > >> ~Blairo > > > > > > -- > > Cheers, > > ~Blairo > -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotrmisiak1984 at gmail.com Thu Oct 6 10:22:10 2016 From: piotrmisiak1984 at gmail.com (Piotr Misiak) Date: Thu, 6 Oct 2016 12:22:10 +0200 Subject: [Openstack-operators] [Neutron][Mitaka] very slow network listing Message-ID: Hi guys, I have a very slow network list response time when i add --shared false parameter to cli command. Look at this: http://paste.openstack.org/show/584409/ without --shared False argument I've got response in 2 seconds with --shared False argument I've got response in 32 seconds I debugged a little bit and I see that database returns over 182000 records which is 200MB of data but there are only 4000 unique records. There are more or less 45 duplicates for every unique record and I have 45 records in neutron RBAC so I see a correlation here. The issue is quite important because Horizon uses the request with shared=false to show up the "Launch Instance" form and it takes ages. Anyone has a similar issue? From itzshamail at gmail.com Thu Oct 6 14:22:58 2016 From: itzshamail at gmail.com (Shamail Tahir) Date: Thu, 6 Oct 2016 10:22:58 -0400 Subject: [Openstack-operators] [recognition] AUC Recognition WG Meeting Reminder (10/06) Message-ID: Hi everyone, The AUC recognition WG will be meeting on October 6th, 2016 at 1900 UTC. The details can be found on our wiki page[1]. See you there! *Agenda* * Review status of open action items * Items needed for UC readout * Open [1] https://wiki.openstack.org/wiki/AUCRecognition#Meeting_Information -- Thanks, Shamail Tahir t: @ShamailXD tz: Eastern Time -------------- next part -------------- An HTML attachment was scrubbed... 
From aschultz at redhat.com Thu Oct 6 16:12:27 2016
From: aschultz at redhat.com (Alex Schultz)
Date: Thu, 6 Oct 2016 10:12:27 -0600
Subject: [Openstack-operators] [puppet] Presence at the PTG
Message-ID:

Hi,

We chatted about this a bit in the last meeting[0], but I wanted to
send a note to the wider audience. Our initial thought was that the
puppet group will not have a specific presence at the upcoming PTG in
Atlanta. We don't think we'll have any topics that we can't work
through via our traditional irc/email workflows. If anyone has any
topics or items that they would like to work through at the upcoming
PTG, please let us know and we can revisit this.

Thanks,
-Alex

[0] http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-04-19-15.00.html

From gord at live.ca Thu Oct 6 19:16:07 2016
From: gord at live.ca (gordon chung)
Date: Thu, 6 Oct 2016 19:16:07 +0000
Subject: [Openstack-operators] [telemetry][gnocchi] benchmarking gnocchi v3
Message-ID:

hi folks,

as announced recently, we released Gnocchi v3[1][2]! this marked a major
change in how we process and store data in Gnocchi as we worked on
building a truly open source time-series service. as we were building it,
i've been benchmarking the results and feeding it back into our
development. now that we have a release, i thought i'd share the results
of my last benchmarks in some fancy powerpoint[3].

if you don't want a backstory to some of the design changes just jump to
slide 15[4] for some comparisons on Gnocchi v2 vs Gnocchi v3.

the slides focus only on the performance aspect of Gnocchi but we added
other stuff as well to improve the flexibility of the service.

feel free to ask me questions on my experience.

[1] https://julien.danjou.info/blog/2016/gnocchi-3.0-release
[2] http://lists.openstack.org/pipermail/openstack-announce/2016-September/001649.html
[3] http://www.slideshare.net/GordonChung/gnocchi-v3
[4] http://www.slideshare.net/GordonChung/gnocchi-v3/15

cheers,

--
gord

From weblehner at gmail.com Thu Oct 6 20:47:47 2016
From: weblehner at gmail.com (Lukas Lehner)
Date: Thu, 6 Oct 2016 22:47:47 +0200
Subject: [Openstack-operators] monitor memory ballooning from within the affected VM
Message-ID:

Hey

http://unix.stackexchange.com/questions/314832/openstack-kvm-monitor-memory-ballooning-from-within-the-affected-vm

Lukas

From digitalkaran at gmail.com Fri Oct 7 03:26:02 2016
From: digitalkaran at gmail.com (Karan)
Date: Thu, 6 Oct 2016 23:26:02 -0400
Subject: [Openstack-operators] [Nova][Scheduler] How to filter and pick Host with least number of instances.
In-Reply-To:
References:
Message-ID:

Thanks Mikhail for mentioning custom weighter. Sure I would like to
look at how you've implemented your own weighter. Please share it when
you have it. Also, it would be helpful if you can give
pointers on implementing your weighter based on different metrics of
hosts.

On Wed, Oct 5, 2016 at 4:11 PM, Mikhail Medvedev wrote:
> Hi Karan,
>
> On Sep 22, 2016 19:19, "Karan" wrote:
>>
>> Hi
>>
>> Is it possible to configure openstack scheduler to schedule instances
>> to a host with least number of instances running on it?
>> When multiple hosts are eligible to spawn a new instance, scheduler
>> applies weight multipliers to available RAM and CPU and pick one host.
>> Is there a way to ask scheduler to pick a Host with least number of
>> instances on it.
>>
>
> Yes, there is a way to select a host with the least number of instances. It
> can be done by writing a custom weighter that returns negated number of
> instances as host weight. I wrote an implementation that has been used for a
> while in our test cloud, but I am not going to be able to share it until
> next week. Let me know if you still need it by then.
>
>>
>> Thanks
>> Karan
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Mikhail Medvedev (mmedvede)
IBM, OpenStack CI for KVM on Power

From stig.openstack at telfer.org Fri Oct 7 14:33:26 2016
From: stig.openstack at telfer.org (Stig Telfer)
Date: Fri, 7 Oct 2016 15:33:26 +0100
Subject: [Openstack-operators] Users of Ceph with Accelio?
Message-ID:

Hi All -

I'm interested in evaluating the performance of Ceph running over Accelio
for RDMA data transport. From what I can see on the web, the early work
done on adding RDMA support does not appear to have materialised yet into
a stable production release.

Is there anyone out there who has used Accelio with Ceph and can comment
on their experience with it?

Many thanks,
Stig

From dgvigi at sandia.gov Fri Oct 7 15:38:13 2016
From: dgvigi at sandia.gov (Vigil, David Gabriel)
Date: Fri, 7 Oct 2016 15:38:13 +0000
Subject: [Openstack-operators] [EXTERNAL] Re: Tenant/Project naming restrictions
In-Reply-To:
References: <8ff8f0dcbe1049189b30acb4c525ca5a@ES01AMSNLNT.srn.sandia.gov>
Message-ID: <2067d47227bb4da7bf7063abbf4a15d1@ES01AMSNLNT.srn.sandia.gov>

So, no one knows of official documents on tenant naming restrictions?

Dave G Vigil Sr
Systems Integration Analyst Sr/SAIC Lead 09321
Common Engineering Environment
dgvigi at sandia.gov

-----Original Message-----
From: Saverio Proto [mailto:zioproto at gmail.com]
Sent: Thursday, October 6, 2016 1:21 AM
To: Steve Martinelli
Cc: Vigil, David Gabriel ; openstack-operators at lists.openstack.org
Subject: [EXTERNAL] Re: [Openstack-operators] Tenant/Project naming restrictions

Is the '@' character allowed in the tenant/project names?

Saverio

2016-10-05 23:36 GMT+02:00 Steve Martinelli :
> There are some restrictions.
>
> 1. The project name cannot be longer than 64 characters.
> 2. Within a domain, the project name is unique. So you can have
> project "foo" in the "default" domain, and in any other domain.
>
> On Wed, Oct 5, 2016 at 5:16 PM, Vigil, David Gabriel
>
> wrote:
>> >> >> >> >> >> >> >> Dave G Vigil Sr >> >> Systems Integration Analyst Sr/SAIC Lead 09321 >> >> Common Engineering Environment >> >> dgvigi at sandia.gov >> >> 505-284-0157 (office) >> >> SAIC >> >> >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operato >> rs >> > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operator > s > From clint at fewbar.com Fri Oct 7 16:05:26 2016 From: clint at fewbar.com (Clint Byrum) Date: Fri, 07 Oct 2016 09:05:26 -0700 Subject: [Openstack-operators] [EXTERNAL] Re: Tenant/Project naming restrictions In-Reply-To: <2067d47227bb4da7bf7063abbf4a15d1@ES01AMSNLNT.srn.sandia.gov> References: <8ff8f0dcbe1049189b30acb4c525ca5a@ES01AMSNLNT.srn.sandia.gov> <2067d47227bb4da7bf7063abbf4a15d1@ES01AMSNLNT.srn.sandia.gov> Message-ID: <1475855702-sup-2623@fewbar.com> Sounds like a bug in the API documentation: http://developer.openstack.org/api-ref/identity/v3/?expanded=create-project-detail "name body string The name of the project, which must be unique within the owning domain. A project can have the same name as its domain." Unfortunately, IMO these API's are woefully under-documented. There's no mentioned limit on length. One can assume anything valid in the JSON body as a "string" is valid, so, any utf-8 character should work. In reality, there are limits in the backend storage schema, and likely problems with the wider UTF-8 characters in most peoples' clouds because MySQL doesn't really support > 4 byte UTF-8. I suggest opening a bug against any of the projects that fail to document the limitations in their api-ref. However, we can at least refer to the API tests as what is tested to work: https://github.com/openstack/tempest/blob/master/tempest/api/identity/admin/v3/test_projects_negative.py#L60-L64 Some tests that verify that one cannot save invalid utf-8 chars would be useful there. Excerpts from Vigil, David Gabriel's message of 2016-10-07 15:38:13 +0000: > So, no one knows of official documents on tenant naming restrictions? > > > Dave G Vigil Sr > Systems Integration Analyst Sr/SAIC Lead 09321 > Common Engineering Environment > dgvigi at sandia.gov > > -----Original Message----- > From: Saverio Proto [mailto:zioproto at gmail.com] > Sent: Thursday, October 6, 2016 1:21 AM > To: Steve Martinelli > Cc: Vigil, David Gabriel ; openstack-operators at lists.openstack.org > Subject: [EXTERNAL] Re: [Openstack-operators] Tenant/Project naming restrictions > > Is the '@' character allowed in the tenant/project names ? > > Saverio > > 2016-10-05 23:36 GMT+02:00 Steve Martinelli : > > There are some restrictions. > > > > 1. The project name cannot be longer than 64 characters. > > 2. Within a domain, the project name is unique. So you can have > > project "foo" in the "default" domain, and in any other domain. > > > > On Wed, Oct 5, 2016 at 5:16 PM, Vigil, David Gabriel > > > > wrote: > >> > >> What, if any, are the official tenant/project naming > >> requirements/restrictions? I can?t find any documentation that speaks > >> to any limitations. Is this documented somewhere? 
> >>
> >>
> >> Dave G Vigil Sr
> >>
> >> Systems Integration Analyst Sr/SAIC Lead 09321
> >>
> >> Common Engineering Environment
> >>
> >> dgvigi at sandia.gov
> >>
> >> 505-284-0157 (office)
> >>
> >> SAIC
> >>
> >>
> >> _______________________________________________
> >> OpenStack-operators mailing list
> >> OpenStack-operators at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From s.martinelli at gmail.com Fri Oct 7 16:08:10 2016
From: s.martinelli at gmail.com (Steve Martinelli)
Date: Fri, 7 Oct 2016 12:08:10 -0400
Subject: [Openstack-operators] [EXTERNAL] Re: Tenant/Project naming restrictions
In-Reply-To: <2067d47227bb4da7bf7063abbf4a15d1@ES01AMSNLNT.srn.sandia.gov>
References: <8ff8f0dcbe1049189b30acb4c525ca5a@ES01AMSNLNT.srn.sandia.gov> <2067d47227bb4da7bf7063abbf4a15d1@ES01AMSNLNT.srn.sandia.gov>
Message-ID:

@'s should work just fine:

(test_osc) $ openstack project create test at test
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| enabled     | True                             |
| id          | 730b3aaca18a4a37acd2cdf1d4b5f35f |
| name        | test at test                        |
+-------------+----------------------------------+

(test_osc) $ openstack project show test at test
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| enabled     | True                             |
| id          | 730b3aaca18a4a37acd2cdf1d4b5f35f |
| name        | test at test                        |
| properties  |                                  |
+-------------+----------------------------------+

With UTF8 your mileage will vary

(test_osc) $ openstack project create ?
An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-ca2a0c60-8c20-40dd-8ca6-49a9ecbf9447)

(test_osc) $ openstack project create "elniño"
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| enabled     | True                             |
| id          | b87bdafc6d32449bb1147f1acdcd3f49 |
| name        | elniño                           |
+-------------+----------------------------------+

(test_osc) $ openstack project show "elniño"
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| enabled     | True                             |
| id          | b87bdafc6d32449bb1147f1acdcd3f49 |
| name        | elniño                           |
| properties  |                                  |
+-------------+----------------------------------+

On Fri, Oct 7, 2016 at 11:38 AM, Vigil, David Gabriel wrote:

> So, no one knows of official documents on tenant naming restrictions?
>
>
> Dave G Vigil Sr
> Systems Integration Analyst Sr/SAIC Lead 09321
> Common Engineering Environment
> dgvigi at sandia.gov
>
> -----Original Message-----
> From: Saverio Proto [mailto:zioproto at gmail.com]
> Sent: Thursday, October 6, 2016 1:21 AM
> To: Steve Martinelli
> Cc: Vigil, David Gabriel ; openstack-operators at lists.
> openstack.org > Subject: [EXTERNAL] Re: [Openstack-operators] Tenant/Project naming > restrictions > > Is the '@' character allowed in the tenant/project names ? > > Saverio > > 2016-10-05 23:36 GMT+02:00 Steve Martinelli : > > There are some restrictions. > > > > 1. The project name cannot be longer than 64 characters. > > 2. Within a domain, the project name is unique. So you can have > > project "foo" in the "default" domain, and in any other domain. > > > > On Wed, Oct 5, 2016 at 5:16 PM, Vigil, David Gabriel > > > > wrote: > >> > >> What, if any, are the official tenant/project naming > >> requirements/restrictions? I can?t find any documentation that speaks > >> to any limitations. Is this documented somewhere? > >> > >> > >> > >> > >> > >> > >> > >> Dave G Vigil Sr > >> > >> Systems Integration Analyst Sr/SAIC Lead 09321 > >> > >> Common Engineering Environment > >> > >> dgvigi at sandia.gov > >> > >> 505-284-0157 (office) > >> > >> SAIC > >> > >> > >> > >> > >> _______________________________________________ > >> OpenStack-operators mailing list > >> OpenStack-operators at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operato > >> rs > >> > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operator > > s > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihailmed at gmail.com Mon Oct 10 13:02:19 2016 From: mihailmed at gmail.com (Mikhail Medvedev) Date: Mon, 10 Oct 2016 08:02:19 -0500 Subject: [Openstack-operators] [Nova][Scheduler] How to filter and pick Host with least number of instances. In-Reply-To: References: Message-ID: On Thu, Oct 6, 2016 at 10:26 PM, Karan wrote: > Thanks Mikhail for mentioning custom wieghter. Sure I would like to > look at how you've implemented your own weighter. Please share it when > you've it with you. See [1] for a very simple implementation. > Also, it would be helpful if you can give > pointers on implementing your weighter based on different metrics of > hosts. You can look into what is available through host_state variable if you need to weigh based on something else. See [2] (seems out of date). You can also look/grep through nova scheduler source code. [1] https://github.com/mmedvede/scheduler-weights [2] https://wiki.openstack.org/wiki/Nova-scheduler-HostState > > On Wed, Oct 5, 2016 at 4:11 PM, Mikhail Medvedev wrote: >> Hi Karan, >> >> On Sep 22, 2016 19:19, "Karan" wrote: >>> >>> Hi >>> >>> Is it possible to configure openstack scehduler to schedule instances >>> to a host with least number of instances running on it? >>> When multiple hosts are eligible to spawn a new instance, scheduler >>> applies weight multipliers to available RAM and CPU and pick one host. >>> Is there a way to ask scheduler to pick a Host with least number y of >>> instances on it. >>> >> >> Yes, there is a way to select a host with the least number of instances. It >> can be done by writing a custom weighter that returns negated number of >> instances as host weight. I wrote an implementation that has been used for a >> while in our test cloud, but I am not going to be able to share it until >> next week. Let me know if you still need it by then. 
>> >>> >>> Thanks >>> Karan >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> -- >> Mikhail Medvedev (mmedvede) >> IBM, OpenStack CI for KVM on Power From adam.kijak at corp.ovh.com Mon Oct 10 13:29:39 2016 From: adam.kijak at corp.ovh.com (Adam Kijak) Date: Mon, 10 Oct 2016 13:29:39 +0000 Subject: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph? Message-ID: <85c30a1d875a437d9394e70ab5f4058a@corp.ovh.com> Hello, We use a Ceph cluster for Nova (Glance and Cinder as well) and over time, more and more data is stored there. We can't keep the cluster so big because of Ceph's limitations. Sooner or later it needs to be closed for adding new instances, images and volumes. Not to mention it's a big failure domain. How do you handle this issue? What is your strategy to divide Ceph clusters between compute nodes? How do you solve VM snapshot placement and migration issues then (snapshots will be left on older Ceph)? We've been thinking about features like: dynamic Ceph configuration (not static like in nova.conf) in Nova, pinning instances to a Ceph cluster etc. What do you think about that? -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Mon Oct 10 15:39:48 2016 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 10 Oct 2016 12:39:48 -0300 Subject: [Openstack-operators] [puppet] Standardization of Keystone authtoken Message-ID: Hello everyone, *tl;dr: *We will be removing all old parameters to configure the keystone_authtoken section that was deprecated during the Newton Cycle. In Ocata you will need to use the authtoken class for each module, except for some modules that do not provide the support yet, for more details see this bug [0]. During the Newton cycle we have worked to bring the capability to configure the [keystone_authtoken] section with new parameters using the puppet-keystone resource [1] , each module has its own class to configure the section in configuration files using. We had deprecated all other old parameters and this change is 100% backward compatible, some modules haven't this change, please see this comment [2] for the bug [0]. Now in Ocata we are removing the deprecated parameters, so you will need to use the authtoken class in your manifests, the class should be declared before the api services are configured, for example see [3]. [0] https://bugs.launchpad.net/puppet-aodh/+bug/1604463 [1] https://github.com/openstack/puppet-keystone/blob/ master/manifests/resource/authtoken.pp [2] https://bugs.launchpad.net/puppet-aodh/+bug/1604463/comments/8 [3] https://github.com/openstack/puppet-openstack-integration/ blob/master/manifests/aodh.pp#L57-L68 -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ *Att[]'sIury Gregory Melo Ferreira * *Master student in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *E-mail: iurygregory at gmail.com * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From jonmills at gmail.com  Mon Oct 10 17:36:48 2016
From: jonmills at gmail.com (Jonathan Mills)
Date: Mon, 10 Oct 2016 13:36:48 -0400
Subject: [Openstack-operators] Custom VM FQDNs and DNS integration
Message-ID: <3968997F-E5A2-4B88-86D6-35013F824925@icloud.com>

What I would like to see in terms of the FQDNs of my VMs is something like this:

%{hostname}.%{tenant-name}.example.com

Does anyone know the easiest way to make this possible? From my browsing,
the tenant-name part doesn't seem like a native capability. I had been
wondering if achieving this was going to require a post-boot setting. For
instance, using cloud-init to pull info from the metadata service to set
the FQDN. But I don't even see where the tenant name is part of the
metadata. Then again, I'm not so familiar with these corners of OpenStack.

In terms of DNS integration, if I could make Nova create the FQDNs I want,
then I assume I could get Designate to do the right thing. But if I end up
setting the VMs' FQDNs as a post-boot function, then it seems like I would
need a dynamic DNS update to happen... and probably not run Designate. Am I
on the right path? What are others running?

From xavpaice at gmail.com  Mon Oct 10 18:41:13 2016
From: xavpaice at gmail.com (Xav Paice)
Date: Tue, 11 Oct 2016 07:41:13 +1300
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?
In-Reply-To: <85c30a1d875a437d9394e70ab5f4058a@corp.ovh.com>
References: <85c30a1d875a437d9394e70ab5f4058a@corp.ovh.com>
Message-ID: <1476124873.6209.3.camel@gmail.com>

On Mon, 2016-10-10 at 13:29 +0000, Adam Kijak wrote:
> Hello,
> 
> We use a Ceph cluster for Nova (Glance and Cinder as well) and over
> time,
> more and more data is stored there. We can't keep the cluster so big
> because of
> Ceph's limitations. Sooner or later it needs to be closed for adding
> new
> instances, images and volumes. Not to mention it's a big failure
> domain.

I'm really keen to hear more about those limitations.

> 
> How do you handle this issue?
> What is your strategy to divide Ceph clusters between compute nodes?
> How do you solve VM snapshot placement and migration issues then
> (snapshots will be left on older Ceph)?

Having played with Ceph and compute on the same hosts, I'm a big fan of
separating them and having dedicated Ceph hosts, and dedicated compute
hosts.  That allows me a lot more flexibility with hardware
configuration and maintenance, easier troubleshooting for resource
contention, and also allows scaling at different rates.

> 
> We've been thinking about features like: dynamic Ceph configuration
> (not static like in nova.conf) in Nova, pinning instances to a Ceph
> cluster etc.
> What do you think about that?
> 
> 
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operato
> rs

From matt at mattfischer.com  Mon Oct 10 18:55:55 2016
From: matt at mattfischer.com (Matt Fischer)
Date: Mon, 10 Oct 2016 12:55:55 -0600
Subject: [Openstack-operators] Custom VM FQDNs and DNS integration
In-Reply-To: <3968997F-E5A2-4B88-86D6-35013F824925@icloud.com>
References: <3968997F-E5A2-4B88-86D6-35013F824925@icloud.com>
Message-ID: 

The last time I tried this, which was probably 18 months ago to be fair,
there is no way for the VM to get its own tenant name. You could pass it
in with cloud-init if you want, but it's not in the metadata that I recall.
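If you do go the cloud-init route, a minimal sketch might look like this
(assuming the image runs cloud-init; the file name and example.com are just
examples, and OS_TENANT_NAME is simply the caller's own environment variable,
since the VM itself can't discover it):

$ cat > user-data.txt <<EOF
#cloud-config
fqdn: fred.${OS_TENANT_NAME}.example.com
manage_etc_hosts: true
EOF
$ openstack server create --image ubuntu-16.04 --flavor m1.small \
    --user-data user-data.txt fred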
For Designate however I don't know why you'd want this. You want the format above for the DNS record, but you don't need it for the VM name. My VMs are called (for example): fred & bob and so Nova sees "fred" and "bob" but Designate has: fred.tenant-subdomain.foo.com bob.tenant-subdomain.foo.com In other words, I don't think the name in nova and the DNS record need to match for this to work the way you want it to. On Mon, Oct 10, 2016 at 11:36 AM, Jonathan Mills wrote: > What I would like to see in terms of the FQDNs of my VMs is something like > this: > > %{hostname}.%{tenant-name}.example.com > > Does anyone know the easiest way to make this possible? From my browsing, > the tenant-name part doesn?t seem like a native capability. I had been > wondering if achieving this was going to require a post-boot setting. For > instance, using cloud-init to pull info from the metadata service to set > the FQDN. But I don?t even see where the tenant name is part of the > metadata. Then again, I?m not so familiar with these corners of OpenStack. > > In terms of DNS integration, if I could make Nova create the FQDNs I want, > then I assume I could get Designate to do the right thing. But if I end up > setting the VMs? FQDNs as a post-boot function, then it seems like I would > need a dynamic DNS update to happen?.and probably not run Designate. Am I > on the right path? What are others running? > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alopgeek at gmail.com Mon Oct 10 19:57:16 2016 From: alopgeek at gmail.com (Abel Lopez) Date: Mon, 10 Oct 2016 12:57:16 -0700 Subject: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph? In-Reply-To: <85c30a1d875a437d9394e70ab5f4058a@corp.ovh.com> References: <85c30a1d875a437d9394e70ab5f4058a@corp.ovh.com> Message-ID: <77C16534-E8D7-4657-8691-FC02D91CF5D4@gmail.com> Have you thought about dedicated pools for cinder/nova and a separate pool for glance, and any other uses you might have? You need to setup secrets on kvm, but you can have cinder creating volumes from glance images quickly in different pools > On Oct 10, 2016, at 6:29 AM, Adam Kijak wrote: > > Hello, > > We use a Ceph cluster for Nova (Glance and Cinder as well) and over time, > more and more data is stored there. We can't keep the cluster so big because of > Ceph's limitations. Sooner or later it needs to be closed for adding new > instances, images and volumes. Not to mention it's a big failure domain. > > How do you handle this issue? > What is your strategy to divide Ceph clusters between compute nodes? > How do you solve VM snapshot placement and migration issues then > (snapshots will be left on older Ceph)? > > We've been thinking about features like: dynamic Ceph configuration > (not static like in nova.conf) in Nova, pinning instances to a Ceph cluster etc. > What do you think about that? > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From Peter.Hamilton at jhuapl.edu Tue Oct 11 17:31:33 2016 From: Peter.Hamilton at jhuapl.edu (Hamilton, Peter A.) 
Date: Tue, 11 Oct 2016 17:31:33 +0000
Subject: [Openstack-operators] [openstack-operators][nova] Options for using trusted certificates in Nova image signature verification
Message-ID: 

Hi everyone,

We are interested in improving user trust in the cloud and are currently
working on certificate validation for image signatures in Nova. Users can
upload signed images to Glance and then boot them via Nova, with Nova
validating the image signature when it downloads the image. Adding
certificate validation to this process helps users be confident that the
signed image they are booting is the image they expect it to be.

For clarity, here is a brief description of how the feature works:

1. User signs an image with their signing certificate
2. User uploads the cert to a certificate manager (e.g. Barbican)
3. User uploads the signed image to Glance with signature metadata
4. User tells Nova to create a new server using the signed image
5. Nova retrieves the image and image metadata
6. Nova retrieves the signing certificate
7. Nova conducts signature verification
8. If successful, Nova creates the new server

With the addition of certificate validation, this workflow changes slightly.

...
6. Nova retrieves the signing certificate
7. Nova obtains the trusted certificate (which signed the signing cert)
8. Nova conducts certificate validation
9. If successful, Nova conducts signature verification
10. If successful, Nova creates the new server

Given that this is a feature that spans services and has an impact on the
end-user, we want to get as much feedback from the community as possible,
including from developers and operators.

Here are the current options for supporting certificate validation in Nova,
specifically dealing with how to obtain the trusted certificate:

1. Update the Nova API to let the user pass in the certificate ID

The current proposed approach is to update the Nova API for the server
create call to allow the user to specify the ID of the trusted certificate
to use when validating the image signature. This places the user
in-the-loop, requiring them to provide a new specific piece of information
to successfully boot a signed image. However, there are no easy ways to
find the trusted certificate ID for an arbitrary signed image. If different
users sign and boot the same image, out-of-band communication and metadata
storage would be needed.

2. Add support for a certificate trust store to define trusted certs

Another approach adds support for a certificate trust store, which contains
a set of trusted signing certificates that are accessed when verifying the
image signature. When Nova conducts image signature verification, it would
pull in the certificates in the trust store and search through them until
it finds the certificate that signed the image's signing certificate. If
Nova cannot find it, boot fails.

3. Use a hybrid approach of both #1 and #2

A third approach is a hybridization of the above two approaches, leveraging
the benefits of both while minimizing their drawbacks. The Nova API would
be updated to allow the user to pass the trusted certificate ID when
creating a new server. However, if the user did not provide this ID, Nova
would pull the trusted certificates from the certificate trust store to
fill the gap. Either the user-provided trusted certificate (preferred) or
the set of certificates pulled from the certificate trust store (backup)
can then be used as needed.

There are benefits and downsides to each approach.
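(For intuition, the certificate validation in step 8 is essentially the
same check openssl performs here; the file names are illustrative only:

$ openssl verify -CAfile trusted_cert.pem signing_cert.pem
signing_cert.pem: OK

i.e. confirm that the signing certificate chains to a certificate we
already trust, before the signature on the image itself is verified.)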
If you are interested in further details, we are in the process of updating an Ocata Nova spec for this feature, linked below: https://review.openstack.org/#/c/357151/ We are interested in your thoughts on certificate validation, specifically on which of the above three options you prefer and why. Thank you for your time, Peter Hamilton From sbezverk at cisco.com Tue Oct 11 17:36:16 2016 From: sbezverk at cisco.com (Serguei Bezverkhi (sbezverk)) Date: Tue, 11 Oct 2016 17:36:16 +0000 Subject: [Openstack-operators] [openstack-operators][nova] Options for using trusted certificates in Nova image signature verification In-Reply-To: References: Message-ID: <8b18347d1bd846a9bf98939c8858a3cb@XCH-ALN-006.cisco.com> In some cases an image uploaded into glance need modification, example some attributes. How do you plan to handle cases like this? Are you planning this tool to update image signature? Serguei -----Original Message----- From: Hamilton, Peter A. [mailto:Peter.Hamilton at jhuapl.edu] Sent: Tuesday, October 11, 2016 1:32 PM To: openstack-operators at lists.openstack.org Subject: [Openstack-operators] [openstack-operators][nova] Options for using trusted certificates in Nova image signature verification Hi everyone, We are interested in improving user trust in the cloud and are currently working on certificate validation for image signatures in Nova. Users can upload signed images to Glance and then boot them via Nova, with Nova validating the image signature when it downloads the image. Adding certificate validation to this process helps users be confident that the signed image they are booting is the image they expect it to be. For clarity, here is a brief description about how the feature works: 1. User signs an image with their signing certificate 2. User uploads the cert to a certificate manager (e.g. Barbican) 3. User uploads the signed image to Glance with signature metadata 4. User tells Nova to create a new server using the signed image 5. Nova retrieves the image and image metadata 6. Nova retrieves the signing certificate 7. Nova conducts signature verification 8. If successful, Nova creates the new server With the addition of certificate validation, this workflow changes slightly. ... 6. Nova retrieves the signing certificate 7. Nova obtains the trusted certificate (which signed the signing cert) 8. Nova conducts certificate validation 9. If successful, Nova conducts signature verification 10. If successful, Nova creates the new server Given that this is a feature that spans services and has an impact on the end-user, we want to get as much feedback from the community as possible, including from developers and operators. Here are the current options for supporting certificate validation in Nova, specifically dealing with how to obtain the trusted certificate: 1. Update the Nova API to let the user pass in the certificate ID The current proposed approach is to update the Nova API for the server create call to allow the user to specify the ID of the trusted certificate to use when validating the image signature. This places the user in-the-loop, requiring them to provide a new specific piece of information to successfully boot a signed image. However, there are no easy ways to find the trusted certificate ID for an arbitrary signed image. If different users sign and boot the same image, out-of-band communication and metadata storage would be needed. 2. 
Add support for a certificate trust store to define trusted certs Another approach adds support for a certificate trust store, which contains a set of trusted signing certificates that are accessed when verifying the image signature. When Nova conducts image signature verification, it would pull in the certificates in the trust store and search through them until it finds the certificate that signed the image's signing certificate. If Nova cannot find it, boot fails. 3. Use a hybrid approach of both #1 and #2 A third approach is a hybridization of the above two approaches, leveraging the benefits of both while minimizing their drawbacks. The Nova API would be updated to allow the user to pass the trusted certificate ID when creating a new server. However, if the user did not provide this ID Nova would pull the trusted certificates from the certificate trust store to fill the gap. Either the user-provided trusted certificate (preferred) or the set of certificates pulled from the certificate trust store (backup) can then be used as needed. There are benefits and downsides to each approach. If you are interested in further details, we are in the process of updating an Ocata Nova spec for this feature, linked below: https://review.openstack.org/#/c/357151/ We are interested in your thoughts on certificate validation, specifically on which of the above three options you prefer and why. Thank you for your time, Peter Hamilton _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From Peter.Hamilton at jhuapl.edu Tue Oct 11 18:10:03 2016 From: Peter.Hamilton at jhuapl.edu (Hamilton, Peter A.) Date: Tue, 11 Oct 2016 18:10:03 +0000 Subject: [Openstack-operators] [openstack-operators][nova] Options for using trusted certificates in Nova image signature verification In-Reply-To: <8b18347d1bd846a9bf98939c8858a3cb@XCH-ALN-006.cisco.com> References: <8b18347d1bd846a9bf98939c8858a3cb@XCH-ALN-006.cisco.com> Message-ID: If you need to update signed image data, you just need to make sure that you also update the signature properties associated with that image. Updating just the signed image itself would make it inconsistent with the preexisting image signature, meaning that image signature verification would fail when Nova attempts to boot the updated signed image. This is orthogonal to the issues of certificate validation, but is still important to clarify. Thanks for asking! Peter On 10/11/16, 1:36 PM, "Serguei Bezverkhi (sbezverk)" wrote: >In some cases an image uploaded into glance need modification, example >some attributes. How do you plan to handle cases like this? Are you >planning this tool to update image signature? > >Serguei > >-----Original Message----- >From: Hamilton, Peter A. [mailto:Peter.Hamilton at jhuapl.edu] >Sent: Tuesday, October 11, 2016 1:32 PM >To: openstack-operators at lists.openstack.org >Subject: [Openstack-operators] [openstack-operators][nova] Options for >using trusted certificates in Nova image signature verification > >Hi everyone, > >We are interested in improving user trust in the cloud and are currently >working on certificate validation for image signatures in Nova. Users can >upload signed images to Glance and then boot them via Nova, with Nova >validating the image signature when it downloads the image. 
Adding >certificate validation to this process helps users be confident that the >signed image they are booting is the image they expect it to be. > >For clarity, here is a brief description about how the feature works: > >1. User signs an image with their signing certificate 2. User uploads the >cert to a certificate manager (e.g. Barbican) 3. User uploads the signed >image to Glance with signature metadata 4. User tells Nova to create a >new server using the signed image 5. Nova retrieves the image and image >metadata 6. Nova retrieves the signing certificate 7. Nova conducts >signature verification 8. If successful, Nova creates the new server > >With the addition of certificate validation, this workflow changes >slightly. > >... >6. Nova retrieves the signing certificate 7. Nova obtains the trusted >certificate (which signed the signing cert) 8. Nova conducts certificate >validation 9. If successful, Nova conducts signature verification 10. If >successful, Nova creates the new server > >Given that this is a feature that spans services and has an impact on the >end-user, we want to get as much feedback from the community as possible, >including from developers and operators. > >Here are the current options for supporting certificate validation in >Nova, specifically dealing with how to obtain the trusted certificate: > >1. Update the Nova API to let the user pass in the certificate ID > >The current proposed approach is to update the Nova API for the server >create call to allow the user to specify the ID of the trusted >certificate to use when validating the image signature. This places the >user in-the-loop, requiring them to provide a new specific piece of >information to successfully boot a signed image. However, there are no >easy ways to find the trusted certificate ID for an arbitrary signed >image. If different users sign and boot the same image, out-of-band >communication and metadata storage would be needed. > >2. Add support for a certificate trust store to define trusted certs > >Another approach adds support for a certificate trust store, which >contains a set of trusted signing certificates that are accessed when >verifying the image signature. When Nova conducts image signature >verification, it would pull in the certificates in the trust store and >search through them until it finds the certificate that signed the >image's signing certificate. If Nova cannot find it, boot fails. > >3. Use a hybrid approach of both #1 and #2 > >A third approach is a hybridization of the above two approaches, >leveraging the benefits of both while minimizing their drawbacks. The >Nova API would be updated to allow the user to pass the trusted >certificate ID when creating a new server. However, if the user did not >provide this ID Nova would pull the trusted certificates from the >certificate trust store to fill the gap. Either the user-provided trusted >certificate (preferred) or the set of certificates pulled from the >certificate trust store (backup) can then be used as needed. > >There are benefits and downsides to each approach. If you are interested >in further details, we are in the process of updating an Ocata Nova spec >for this feature, linked below: > >https://review.openstack.org/#/c/357151/ > >We are interested in your thoughts on certificate validation, >specifically on which of the above three options you prefer and why. 
> >Thank you for your time, >Peter Hamilton > > >_______________________________________________ >OpenStack-operators mailing list >OpenStack-operators at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From stig.openstack at telfer.org Tue Oct 11 22:31:28 2016 From: stig.openstack at telfer.org (Stig Telfer) Date: Tue, 11 Oct 2016 23:31:28 +0100 Subject: [Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Wednesday 0900 UTC Message-ID: <4DD21E32-1544-4963-A8BB-93291D31D666@telfer.org> Hi everyone - We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel #openstack-meeting. The agenda is available here[1] and full IRC meeting details are here[2]. This week we have our confirmed slots on the Barcelona summit schedule, plus other planning besides. Also Blair?s going to follow up on his recent work on hypervisor tuning. We are also hoping to have Phil Kershaw, chair of the Cloud Working Group of the Research Councils UK, to talk about an upcoming cloud workshop on OpenStack for research computing hosted at the Francis Crick institute in London. If anyone would like to add an item for discussion on the agenda, it is also available in an etherpad[3]. Best wishes, Stig [1] https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_October_12th_2016 [2] http://eavesdrop.openstack.org/#Scientific_Working_Group [3] https://etherpad.openstack.org/p/Scientific-WG-next-meeting-agenda From stig.openstack at telfer.org Tue Oct 11 22:36:47 2016 From: stig.openstack at telfer.org (Stig Telfer) Date: Tue, 11 Oct 2016 23:36:47 +0100 Subject: [Openstack-operators] [scientific] Barcelona Scientific BoF: Call for lightning talks Message-ID: <58E7BD20-D0D1-4480-8ED3-B2B1FC4E1397@telfer.org> Hello all - We have our schedule confirmed and will be having a BoF for Scientific OpenStack users at 2:15pm on the summit Wednesday: https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16779/scientific-working-group-bof-and-poster-session We are planning to run some lightning talks in this session, typically up to 5 minutes long. If you or your institution have been implementing some bright ideas that take OpenStack into new territory for research computing use cases, lets hear it! Please follow up to me and Blair (Scientific WG co-chairs) if you?re interested in speaking and would like to bag a slot. Best wishes, Stig From adam.kijak at corp.ovh.com Wed Oct 12 12:23:41 2016 From: adam.kijak at corp.ovh.com (Adam Kijak) Date: Wed, 12 Oct 2016 12:23:41 +0000 Subject: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph? In-Reply-To: <1476124873.6209.3.camel@gmail.com> References: <85c30a1d875a437d9394e70ab5f4058a@corp.ovh.com>, <1476124873.6209.3.camel@gmail.com> Message-ID: <839b3aba73394bf9aae56c801687e50c@corp.ovh.com> > ________________________________________ > From: Xav Paice > Sent: Monday, October 10, 2016 8:41 PM > To: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph? > > On Mon, 2016-10-10 at 13:29 +0000, Adam Kijak wrote: > > Hello, > > > > We use a Ceph cluster for Nova (Glance and Cinder as well) and over > > time, > > more and more data is stored there. We can't keep the cluster so big > > because of > > Ceph's limitations. Sooner or later it needs to be closed for adding > > new > > instances, images and volumes. Not to mention it's a big failure > > domain. 
> 
> I'm really keen to hear more about those limitations.

Basically it's all related to the failure domain ("blast radius") and risk management.
Bigger Ceph cluster means more users.
Growing the Ceph cluster temporarily slows it down, so many users will be affected.
There are bugs in Ceph which can cause data corruption. It's rare, but when it happens
it can affect many (maybe all) users of the Ceph cluster.

> 
> > 
> > How do you handle this issue?
> > What is your strategy to divide Ceph clusters between compute nodes?
> > How do you solve VM snapshot placement and migration issues then
> > (snapshots will be left on older Ceph)?
> 
> Having played with Ceph and compute on the same hosts, I'm a big fan of
> separating them and having dedicated Ceph hosts, and dedicated compute
> hosts.  That allows me a lot more flexibility with hardware
> configuration and maintenance, easier troubleshooting for resource
> contention, and also allows scaling at different rates.

Exactly, I consider it the best practice as well.

From lutz.birkhahn at noris.de  Wed Oct 12 12:25:38 2016
From: lutz.birkhahn at noris.de (Lutz Birkhahn)
Date: Wed, 12 Oct 2016 12:25:38 +0000
Subject: [Openstack-operators] Ubuntu package for Octavia
Message-ID: 

Has anyone seen Ubuntu packages for Octavia yet?

We're running Ubuntu 16.04 with Newton, but for whatever reason I cannot
find any Octavia package...

So far I've only found in https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun
the following:

     Ubuntu Packages Setup: Install octavia with your favorite
     distribution: 'pip install octavia'

That was not exactly what we would like to do in our production cloud...

Thanks,

/lutz
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 6404 bytes
Desc: not available
URL: 

From adam.kijak at corp.ovh.com  Wed Oct 12 12:35:48 2016
From: adam.kijak at corp.ovh.com (Adam Kijak)
Date: Wed, 12 Oct 2016 12:35:48 +0000
Subject: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?
In-Reply-To: <77C16534-E8D7-4657-8691-FC02D91CF5D4@gmail.com>
References: <85c30a1d875a437d9394e70ab5f4058a@corp.ovh.com>, <77C16534-E8D7-4657-8691-FC02D91CF5D4@gmail.com>
Message-ID: <29c531e1c0614dc1bd1cf587d69aa45b@corp.ovh.com>

> _______________________________________
> From: Abel Lopez
> Sent: Monday, October 10, 2016 9:57 PM
> To: Adam Kijak
> Cc: openstack-operators
> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?
>
> Have you thought about dedicated pools for cinder/nova and a separate pool for glance, and any other uses you might have?
> You need to setup secrets on kvm, but you can have cinder creating volumes from glance images quickly in different pools

We already have separate pools for images, volumes and instances.
Separate pools don't really split the failure domain though.
Also AFAIK you can't set up multiple pools for instances in nova.conf, right?

From mriedem at linux.vnet.ibm.com  Wed Oct 12 14:00:11 2016
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 12 Oct 2016 09:00:11 -0500
Subject: [Openstack-operators] [nova] Does anyone use the os-diagnostics API?
Message-ID: <5dae5c89-b682-15c7-11c6-d9a5481076a4@linux.vnet.ibm.com>

The current form of the nova os-diagnostics API is hypervisor-specific,
which makes it pretty unusable in any generic way, which is why Tempest
doesn't test it.
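To illustrate the problem, a libvirt-backed deployment today returns
something like this (values made up and keys trimmed; a Xen or VMware
driver returns an entirely different set of keys):

$ nova diagnostics <server-id>
+--------------+----------+
| Property     | Value    |
+--------------+----------+
| cpu0_time    | 17300000 |
| memory       | 524288   |
| vda_read     | 262144   |
| vda_read_req | 112      |
| vnet0_rx     | 2070139  |
+--------------+----------+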
Way back when the v3 API was a thing for 2 minutes there was work done to standardize the diagnostics information across virt drivers in nova. The only thing is we haven't exposed that out of the REST API yet, but there is a spec proposing to do that now: https://review.openstack.org/#/c/357884/ This is an admin-only API so we're trying to keep an end user point of view out of discussing it. For example, the disk details don't have any unique identifier. We could add one, but would it be useful to an admin? This API is really supposed to be for debug, but the question I have for this list is does anyone actually use the existing os-diagnostics API? And if so, how do you use it, and what information is most useful? If you are using it, please review the spec and provide any input on what's proposed for outputs. -- Thanks, Matt Riedemann From Ulrich.Kleber at huawei.com Wed Oct 12 14:17:50 2016 From: Ulrich.Kleber at huawei.com (Ulrich Kleber) Date: Wed, 12 Oct 2016 14:17:50 +0000 Subject: [Openstack-operators] OPNFV delivered its new Colorado release Message-ID: <884BFBB6F562F44F91BF83F77C24E9972E7EB5F0@lhreml507-mbx> Hi, I didn't see an official announcement, so I like to point you to the new release of OPNFV. https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0 OPNFV is an open source project and one of the most important users of OpenStack in the Telecom/NFV area. Maybe it is interesting for your work. Feel free to contact me or meet during the Barcelona summit at the session of the OpenStack Operators Telecom/NFV Functional Team (https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16768/openstack-operators-telecomnfv-functional-team). Cheers, Uli Ulrich KLEBER Chief Architect Cloud Platform European Research Center IT R&D Division [huawei_logo] Riesstra?e 25 80992 M?nchen Mobile: +49 (0)173 4636144 Mobile (China): +86 13005480404 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 6737 bytes Desc: image001.jpg URL: From Tim.Bell at cern.ch Wed Oct 12 14:35:54 2016 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 12 Oct 2016 14:35:54 +0000 Subject: [Openstack-operators] [nova] Does anyone use the os-diagnostics API? In-Reply-To: <5dae5c89-b682-15c7-11c6-d9a5481076a4@linux.vnet.ibm.com> References: <5dae5c89-b682-15c7-11c6-d9a5481076a4@linux.vnet.ibm.com> Message-ID: <248C8965-1ECE-4CA3-9B88-A7C75CF8B3AD@cern.ch> > On 12 Oct 2016, at 07:00, Matt Riedemann wrote: > > The current form of the nova os-diagnostics API is hypervisor-specific, which makes it pretty unusable in any generic way, which is why Tempest doesn't test it. > > Way back when the v3 API was a thing for 2 minutes there was work done to standardize the diagnostics information across virt drivers in nova. The only thing is we haven't exposed that out of the REST API yet, but there is a spec proposing to do that now: > > https://review.openstack.org/#/c/357884/ > > This is an admin-only API so we're trying to keep an end user point of view out of discussing it. For example, the disk details don't have any unique identifier. We could add one, but would it be useful to an admin? > > This API is really supposed to be for debug, but the question I have for this list is does anyone actually use the existing os-diagnostics API? And if so, how do you use it, and what information is most useful? 
If you are using it, please review the spec and provide any input on what's proposed for outputs. > Matt, Thanks for asking. We?ve used the API in the past as a way of getting the usage data out of Nova. We had problems running ceilometer at scale and this was a way of retrieving the data for our accounting reports. We created a special policy configuration to allow authorised users query this data without full admin rights. From the look of the new spec, it would be fairly straightforward to adapt the process to use the new format as all the CPU utilisation data is there. Tim > -- > > Thanks, > > Matt Riedemann > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From joe at topjian.net Wed Oct 12 14:44:14 2016 From: joe at topjian.net (Joe Topjian) Date: Wed, 12 Oct 2016 08:44:14 -0600 Subject: [Openstack-operators] [nova] Does anyone use the os-diagnostics API? In-Reply-To: <248C8965-1ECE-4CA3-9B88-A7C75CF8B3AD@cern.ch> References: <5dae5c89-b682-15c7-11c6-d9a5481076a4@linux.vnet.ibm.com> <248C8965-1ECE-4CA3-9B88-A7C75CF8B3AD@cern.ch> Message-ID: Hi Matt, Tim, Thanks for asking. We?ve used the API in the past as a way of getting the > usage data out of Nova. We had problems running ceilometer at scale and > this was a way of retrieving the data for our accounting reports. We > created a special policy configuration to allow authorised users query this > data without full admin rights. > We do this as well. > From the look of the new spec, it would be fairly straightforward to adapt > the process to use the new format as all the CPU utilisation data is there. > I agree. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Wed Oct 12 17:02:11 2016 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 12 Oct 2016 13:02:11 -0400 Subject: [Openstack-operators] OPNFV delivered its new Colorado release In-Reply-To: <884BFBB6F562F44F91BF83F77C24E9972E7EB5F0@lhreml507-mbx> References: <884BFBB6F562F44F91BF83F77C24E9972E7EB5F0@lhreml507-mbx> Message-ID: On 10/12/2016 10:17 AM, Ulrich Kleber wrote: > Hi, > > I didn?t see an official announcement, so I like to point you to the new > release of OPNFV. > > https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0 > > OPNFV is an open source project and one of the most important users of > OpenStack in the Telecom/NFV area. Maybe it is interesting for your work. Hi Ulrich, I'm hoping you can explain to me what exactly OPNFV is producing in its releases. I've been through a number of the Jira items linked in the press release above and simply cannot tell what is being actually delivered by OPNFV versus what is just something that is in an OpenStack component or deployment. A good example of this is the IPV6 project's Jira item here: https://jira.opnfv.org/browse/IPVSIX-37 Which has the title of "Auto-installation of both underlay IPv6 and overlay IPv6". The issue is marked as "Fixed" in Colorado 1.0. However, I can't tell what code was produced in OPNFV that delivers the auto-installation of both an underlay IPv6 and an overlay IPv6. In short, I'm confused about what OPNFV is producing and hope to get some insights from you. 
Best,
-jay

From serverascode at gmail.com  Wed Oct 12 17:21:27 2016
From: serverascode at gmail.com (Curtis)
Date: Wed, 12 Oct 2016 11:21:27 -0600
Subject: [Openstack-operators] glance, nova backed by NFS
Message-ID: 

Hi All,

I've never used NFS with OpenStack before. But I am now with a small
lab deployment with a few compute nodes.

Is there anything special I should do with NFS and glance and nova? I
remember there was an issue way back when of images being deleted b/c
certain components weren't aware they are on NFS. I'm guessing that
has changed but just wanted to check if there is anything specific I
should be doing configuration-wise.

I can't seem to find many examples of NFS usage...so feel free to
point me to any documentation, blog posts, etc. I may have just missed
it.

Thanks,
Curtis.

From klindgren at godaddy.com  Wed Oct 12 17:58:47 2016
From: klindgren at godaddy.com (Kris G. Lindgren)
Date: Wed, 12 Oct 2016 17:58:47 +0000
Subject: [Openstack-operators] glance, nova backed by NFS
In-Reply-To: 
References: 
Message-ID: <94226DBF-0D6F-4585-9341-37E193C5F0E6@godaddy.com>

We don't use shared storage at all.  But I do remember what you are talking
about.  The issue is that compute nodes weren't aware they were on shared
storage, and would nuke the backing image from shared storage after all VMs
on *that* compute node had stopped using it, not after all VMs had stopped
using it.

https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to
address that concern has landed, but only in trunk (maybe Mitaka).  Any
stable releases don't appear to be shared-backing-image safe.

You might be able to get around this by setting the compute image manager
task to not run. But the issue with that will be one missed compute node,
and everyone will have a bad day.

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 10/12/16, 11:21 AM, "Curtis"  wrote:

    Hi All,

    I've never used NFS with OpenStack before. But I am now with a small
    lab deployment with a few compute nodes.

    Is there anything special I should do with NFS and glance and nova? I
    remember there was an issue way back when of images being deleted b/c
    certain components weren't aware they are on NFS. I'm guessing that
    has changed but just wanted to check if there is anything specific I
    should be doing configuration-wise.

    I can't seem to find many examples of NFS usage...so feel free to
    point me to any documentation, blog posts, etc. I may have just missed
    it.

    Thanks,
    Curtis.

    _______________________________________________
    OpenStack-operators mailing list
    OpenStack-operators at lists.openstack.org
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From Tobias.Schon at fiberdata.se  Wed Oct 12 17:59:13 2016
From: Tobias.Schon at fiberdata.se (=?iso-8859-1?Q?Tobias_Sch=F6n?=)
Date: Wed, 12 Oct 2016 17:59:13 +0000
Subject: [Openstack-operators] glance, nova backed by NFS
In-Reply-To: 
References: 
Message-ID: <58f74b23f1254f5886df90183092a32b@elara.ad.fiberdata.se>

Hi,

We have an environment with glance and cinder using NFS. It's important
that they have the correct rights. The shares should be owned by nova on
compute if mounted up on /var/lib/nova/instances, and the same for nova and
glance on the controller. It's important that you map the glance and nova
shares in fstab. The cinder one is controlled by the nfs driver.

We are running rhelosp6, Openstack Juno.
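As an illustration, the fstab entries meant above would be along these
lines (server name and export paths are examples only):

nfsserver:/export/glance  /var/lib/glance/images   nfs  defaults,_netdev  0 0
nfsserver:/export/nova    /var/lib/nova/instances  nfs  defaults,_netdev  0 0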
This parameter is used: nfs_shares_config=/etc/cinder/shares-nfs.conf in
the /etc/cinder/cinder.conf file, and then we have specified the share in
/etc/cinder/shares-nfs.conf.

chmod 0640 /etc/cinder/shares-nfs.conf

setsebool -P virt_use_nfs on
This one is important to make it work with SELinux.

How up to date this actually is I don't know, tbh, but it was up to date as
per the Red Hat documentation when we deployed it around 1.5 years ago.

//Tobias

-----Ursprungligt meddelande-----
Från: Curtis [mailto:serverascode at gmail.com]
Skickat: den 12 oktober 2016 19:21
Till: openstack-operators at lists.openstack.org
Ämne: [Openstack-operators] glance, nova backed by NFS

Hi All,

I've never used NFS with OpenStack before. But I am now with a small lab
deployment with a few compute nodes.

Is there anything special I should do with NFS and glance and nova? I
remember there was an issue way back when of images being deleted b/c
certain components weren't aware they are on NFS. I'm guessing that has
changed but just wanted to check if there is anything specific I should be
doing configuration-wise.

I can't seem to find many examples of NFS usage...so feel free to point me
to any documentation, blog posts, etc. I may have just missed it.

Thanks,
Curtis.

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From klindgren at godaddy.com  Wed Oct 12 18:06:31 2016
From: klindgren at godaddy.com (Kris G. Lindgren)
Date: Wed, 12 Oct 2016 18:06:31 +0000
Subject: [Openstack-operators] glance, nova backed by NFS
In-Reply-To: <58f74b23f1254f5886df90183092a32b@elara.ad.fiberdata.se>
References: <58f74b23f1254f5886df90183092a32b@elara.ad.fiberdata.se>
Message-ID: <88AE471E-81CD-4E72-935D-4390C05F5D33@godaddy.com>

Tobias does bring up something that we have run into before.  With NFSv3,
user mapping is done by ID, so you need to ensure that all of your servers
use the same UID for nova/glance. If you are using packages/automation that
do useradds without pinning the userid, it's *VERY* easy to end up with
mismatched username/UIDs across multiple boxes.  NFSv4, iirc, sends the
username, and the nfs server does the translation of the name to a uid, so
it should not have this issue. But we have been bit by that more than once
on nfsv3.

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 10/12/16, 11:59 AM, "Tobias Schön"  wrote:

    Hi,

    We have an environment with glance and cinder using NFS. It's important
    that they have the correct rights. The shares should be owned by nova on
    compute if mounted up on /var/lib/nova/instances, and the same for nova
    and glance on the controller. It's important that you map the glance and
    nova shares in fstab. The cinder one is controlled by the nfs driver.

    We are running rhelosp6, Openstack Juno.

    This parameter is used: nfs_shares_config=/etc/cinder/shares-nfs.conf in
    the /etc/cinder/cinder.conf file, and then we have specified the share
    in /etc/cinder/shares-nfs.conf.

    chmod 0640 /etc/cinder/shares-nfs.conf

    setsebool -P virt_use_nfs on
    This one is important to make it work with SELinux.

    How up to date this actually is I don't know, tbh, but it was up to date
    as per the Red Hat documentation when we deployed it around 1.5 years ago.
//Tobias

-----Ursprungligt meddelande-----
Från: Curtis [mailto:serverascode at gmail.com]
Skickat: den 12 oktober 2016 19:21
Till: openstack-operators at lists.openstack.org
Ämne: [Openstack-operators] glance, nova backed by NFS

Hi All,

I've never used NFS with OpenStack before. But I am now with a small lab
deployment with a few compute nodes.

Is there anything special I should do with NFS and glance and nova? I
remember there was an issue way back when of images being deleted b/c
certain components weren't aware they are on NFS. I'm guessing that has
changed but just wanted to check if there is anything specific I should be
doing configuration-wise.

I can't seem to find many examples of NFS usage...so feel free to point me
to any documentation, blog posts, etc. I may have just missed it.

Thanks,
Curtis.

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From serverascode at gmail.com  Wed Oct 12 18:18:40 2016
From: serverascode at gmail.com (Curtis)
Date: Wed, 12 Oct 2016 12:18:40 -0600
Subject: [Openstack-operators] glance, nova backed by NFS
In-Reply-To: <94226DBF-0D6F-4585-9341-37E193C5F0E6@godaddy.com>
References: <94226DBF-0D6F-4585-9341-37E193C5F0E6@godaddy.com>
Message-ID: 

On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren
wrote:
> We don't use shared storage at all.  But I do remember what you are talking
> about.  The issue is that compute nodes weren't aware they were on shared
> storage, and would nuke the backing image from shared storage after all VMs
> on *that* compute node had stopped using it, not after all VMs had stopped
> using it.
>
> https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to
> address that concern has landed, but only in trunk (maybe Mitaka).  Any
> stable releases don't appear to be shared-backing-image safe.
>
> You might be able to get around this by setting the compute image manager
> task to not run. But the issue with that will be one missed compute node,
> and everyone will have a bad day.

Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka,
and I will look into that bugfix. I guess I need to test this lol.

Thanks,
Curtis.
> > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- Blog: serverascode.com From jpenick at gmail.com Wed Oct 12 18:34:39 2016 From: jpenick at gmail.com (James Penick) Date: Wed, 12 Oct 2016 11:34:39 -0700 Subject: [Openstack-operators] glance, nova backed by NFS In-Reply-To: References: <94226DBF-0D6F-4585-9341-37E193C5F0E6@godaddy.com> Message-ID: Are you backing both glance and nova-compute with NFS? If you're only putting the glance store on NFS you don't need any special changes. It'll Just Work. On Wed, Oct 12, 2016 at 11:18 AM, Curtis wrote: > On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren > wrote: > > We don?t use shared storage at all. But I do remember what you are > talking about. The issue is that compute nodes weren?t aware they were on > shared storage, and would nuke the backing mage from shared storage, after > all vm?s on *that* compute node had stopped using it. Not after all vm?s > had stopped using it. > > > > https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to > address that concern has landed but only in trunk maybe mitaka. Any > stable releases don?t appear to be shared backing image safe. > > > > You might be able to get around this by setting the compute image > manager task to not run. But the issue with that will be one missed > compute node, and everyone will have a bad day. > > Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka, > and I will look into that bugfix. I guess I need to test this lol. > > Thanks, > Curtis. > > > > > ___________________________________________________________________ > > Kris Lindgren > > Senior Linux Systems Engineer > > GoDaddy > > > > On 10/12/16, 11:21 AM, "Curtis" wrote: > > > > Hi All, > > > > I've never used NFS with OpenStack before. But I am now with a small > > lab deployment with a few compute nodes. > > > > Is there anything special I should do with NFS and glance and nova? I > > remember there was an issue way back when of images being deleted b/c > > certain components weren't aware they are on NFS. I'm guessing that > > has changed but just wanted to check if there is anything specific I > > should be doing configuration-wise. > > > > I can't seem to find many examples of NFS usage...so feel free to > > point me to any documentation, blog posts, etc. I may have just > missed > > it. > > > > Thanks, > > Curtis. > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-operators > > > > > > > > -- > Blog: serverascode.com > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Wed Oct 12 18:49:40 2016 From: serverascode at gmail.com (Curtis) Date: Wed, 12 Oct 2016 12:49:40 -0600 Subject: [Openstack-operators] glance, nova backed by NFS In-Reply-To: References: <94226DBF-0D6F-4585-9341-37E193C5F0E6@godaddy.com> Message-ID: On Wed, Oct 12, 2016 at 12:34 PM, James Penick wrote: > Are you backing both glance and nova-compute with NFS? If you're only > putting the glance store on NFS you don't need any special changes. 
It'll > Just Work. I've got both glance and nova backed by NFS. Haven't put up cinder yet, but that will also be NFS backed. I just have very limited storage on the compute hosts, basically just enough for the operating system; this is just a small but permanent lab deployment. Good to hear that Glance will Just Work. :) Thanks! Thanks, Curtis. > > On Wed, Oct 12, 2016 at 11:18 AM, Curtis wrote: >> >> On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren >> wrote: >> > We don?t use shared storage at all. But I do remember what you are >> > talking about. The issue is that compute nodes weren?t aware they were on >> > shared storage, and would nuke the backing mage from shared storage, after >> > all vm?s on *that* compute node had stopped using it. Not after all vm?s had >> > stopped using it. >> > >> > https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to >> > address that concern has landed but only in trunk maybe mitaka. Any stable >> > releases don?t appear to be shared backing image safe. >> > >> > You might be able to get around this by setting the compute image >> > manager task to not run. But the issue with that will be one missed compute >> > node, and everyone will have a bad day. >> >> Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka, >> and I will look into that bugfix. I guess I need to test this lol. >> >> Thanks, >> Curtis. >> >> > >> > ___________________________________________________________________ >> > Kris Lindgren >> > Senior Linux Systems Engineer >> > GoDaddy >> > >> > On 10/12/16, 11:21 AM, "Curtis" wrote: >> > >> > Hi All, >> > >> > I've never used NFS with OpenStack before. But I am now with a small >> > lab deployment with a few compute nodes. >> > >> > Is there anything special I should do with NFS and glance and nova? >> > I >> > remember there was an issue way back when of images being deleted >> > b/c >> > certain components weren't aware they are on NFS. I'm guessing that >> > has changed but just wanted to check if there is anything specific I >> > should be doing configuration-wise. >> > >> > I can't seem to find many examples of NFS usage...so feel free to >> > point me to any documentation, blog posts, etc. I may have just >> > missed >> > it. >> > >> > Thanks, >> > Curtis. >> > >> > _______________________________________________ >> > OpenStack-operators mailing list >> > OpenStack-operators at lists.openstack.org >> > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > >> > >> >> >> >> -- >> Blog: serverascode.com >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- Blog: serverascode.com From xavpaice at gmail.com Wed Oct 12 19:24:16 2016 From: xavpaice at gmail.com (Xav Paice) Date: Thu, 13 Oct 2016 08:24:16 +1300 Subject: [Openstack-operators] Ubuntu package for Octavia In-Reply-To: References: Message-ID: I highly recommend looking in to Giftwrap for that, until there's UCA packages. The thing missing from the packages that Giftwrap will produce is init scripts, config file examples, and the various user and directory setup stuff. That's easy enough to put into config management or a separate package if you wanted to. On 13 October 2016 at 01:25, Lutz Birkhahn wrote: > Has anyone seen Ubuntu packages for Octavia yet? > > We?re running Ubuntu 16.04 with Newton, but for whatever reason I can not > find any Octavia package? 
From xavpaice at gmail.com  Wed Oct 12 19:24:16 2016
From: xavpaice at gmail.com (Xav Paice)
Date: Thu, 13 Oct 2016 08:24:16 +1300
Subject: [Openstack-operators] Ubuntu package for Octavia
In-Reply-To: 
References: 
Message-ID: 

I highly recommend looking into Giftwrap for that, until there are UCA
packages.

The things missing from the packages that Giftwrap will produce are init
scripts, config file examples, and the various user and directory setup
stuff. That's easy enough to put into config management or a separate
package if you wanted to.

On 13 October 2016 at 01:25, Lutz Birkhahn wrote:

> Has anyone seen Ubuntu packages for Octavia yet?
>
> We're running Ubuntu 16.04 with Newton, but for whatever reason I cannot
> find any Octavia package...
>
> So far I've only found the following in
> https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun:
>
>     Ubuntu Packages Setup: Install octavia with your favorite
>     distribution: 'pip install octavia'
>
> That was not exactly what we would like to do in our production cloud...
>
> Thanks,
>
> /lutz

From warren at wangspeed.com  Wed Oct 12 20:02:55 2016
From: warren at wangspeed.com (Warren Wang)
Date: Wed, 12 Oct 2016 16:02:55 -0400
Subject: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?
In-Reply-To: <29c531e1c0614dc1bd1cf587d69aa45b@corp.ovh.com>
References: <85c30a1d875a437d9394e70ab5f4058a@corp.ovh.com> <77C16534-E8D7-4657-8691-FC02D91CF5D4@gmail.com> <29c531e1c0614dc1bd1cf587d69aa45b@corp.ovh.com>
Message-ID: 

If fault domain is a concern, you can always split the cloud up into 3
regions, each having a dedicated Ceph cluster. It isn't necessarily going
to mean more hardware, just logical splits. This is kind of assuming that
the network doesn't share the same fault domain, though.

Alternatively, you can split the hardware for the Ceph boxes into multiple
clusters, and use multi-backend Cinder to talk to the same set of
hypervisors to use multiple Ceph clusters. We're doing that to migrate
from one Ceph cluster to another. You can even mount a volume from each
cluster into a single instance.

Keep in mind that you don't really want to shrink a Ceph cluster too much.
What's "too big"? You should keep growing so that the fault domains aren't
too small (3 physical racks min), or you guarantee that the entire cluster
stops if you lose network.

Just my 2 cents,
Warren

On Wed, Oct 12, 2016 at 8:35 AM, Adam Kijak wrote:

> > _______________________________________
> > From: Abel Lopez
> > Sent: Monday, October 10, 2016 9:57 PM
> > To: Adam Kijak
> > Cc: openstack-operators
> > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?
> >
> > Have you thought about dedicated pools for cinder/nova and a separate
> > pool for glance, and any other uses you might have?
> > You need to setup secrets on kvm, but you can have cinder creating
> > volumes from glance images quickly in different pools
>
> We already have separate pools for images, volumes and instances.
> Separate pools don't really split the failure domain though.
> Also AFAIK you can't set up multiple pools for instances in nova.conf,
> right?
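To make the multi-cluster arrangement Warren describes concrete, here is
a rough cinder.conf sketch; the backend names, ceph.conf paths, and
secret UUIDs are made up for illustration:

    [DEFAULT]
    enabled_backends = ceph-old,ceph-new

    [ceph-old]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-old
    rbd_ceph_conf = /etc/ceph/ceph-old.conf
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

    [ceph-new]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-new
    rbd_ceph_conf = /etc/ceph/ceph-new.conf
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = 5b67401f-48cd-4c14-8b0e-2f45d2b0a9e6

Each backend is then exposed through a volume type (cinder type-create
ceph-new; cinder type-key ceph-new set volume_backend_name=ceph-new), so
the cluster is chosen per volume at create time, which is what makes the
migrate-by-attrition approach workable.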
From clint at fewbar.com  Wed Oct 12 20:46:01 2016
From: clint at fewbar.com (Clint Byrum)
Date: Wed, 12 Oct 2016 13:46:01 -0700
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?
In-Reply-To: <839b3aba73394bf9aae56c801687e50c@corp.ovh.com>
References: <85c30a1d875a437d9394e70ab5f4058a@corp.ovh.com> <1476124873.6209.3.camel@gmail.com> <839b3aba73394bf9aae56c801687e50c@corp.ovh.com>
Message-ID: <1476304977-sup-4753@fewbar.com>

Excerpts from Adam Kijak's message of 2016-10-12 12:23:41 +0000:
> > ________________________________________
> > From: Xav Paice
> > Sent: Monday, October 10, 2016 8:41 PM
> > To: openstack-operators at lists.openstack.org
> > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?
> >
> > On Mon, 2016-10-10 at 13:29 +0000, Adam Kijak wrote:
> > > Hello,
> > >
> > > We use a Ceph cluster for Nova (Glance and Cinder as well) and over
> > > time, more and more data is stored there. We can't keep the cluster
> > > so big because of Ceph's limitations. Sooner or later it needs to be
> > > closed for adding new instances, images and volumes. Not to mention
> > > it's a big failure domain.
> >
> > I'm really keen to hear more about those limitations.
>
> Basically it's all related to the failure domain ("blast radius") and
> risk management. A bigger Ceph cluster means more users.

Are these risks well documented? Since Ceph is specifically designed
_not_ to have the kind of large blast radius that one might see with,
say, a centralized SAN, I'm curious to hear what events trigger
cluster-wide blasts.

> Growing the Ceph cluster temporarily slows it down, so many users will
> be affected.

One might say that a Ceph cluster that can't be grown without the users
noticing is an over-subscribed Ceph cluster. My understanding is that
one is always advised to provision a certain amount of cluster capacity
for growing and replicating to replaced drives.

> There are bugs in Ceph which can cause data corruption. It's rare, but
> when it happens it can affect many (maybe all) users of the Ceph
> cluster.

:(
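On the "growing the cluster slows it down" point: the expansion pain is
usually backfill competing with client I/O, and it can be throttled. A
commonly used sketch (the values are illustrative; tune for your
hardware):

    # Before adding OSDs, clamp recovery/backfill concurrency
    ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'

    # ...add the new OSDs and let the cluster rebalance slowly...

    # Restore the more aggressive defaults once client latency allows
    ceph tell osd.* injectargs '--osd_max_backfills 10 --osd_recovery_max_active 15'

The trade-off is that the rebalance takes much longer, but tenants are
far less likely to notice it.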
From blair.bethwaite at gmail.com  Thu Oct 13 02:37:58 2016
From: blair.bethwaite at gmail.com (Blair Bethwaite)
Date: Thu, 13 Oct 2016 13:37:58 +1100
Subject: [Openstack-operators] Disable console for an instance
In-Reply-To: 
References: 
Message-ID: 

Hi all,

Does anyone know whether there is a way to disable the novnc console on a
per instance basis?

Cheers,
Blair

From tomi.juvonen at nokia.com  Thu Oct 13 06:12:59 2016
From: tomi.juvonen at nokia.com (Juvonen, Tomi (Nokia - FI/Espoo))
Date: Thu, 13 Oct 2016 06:12:59 +0000
Subject: [Openstack-operators] host maintenance
Message-ID: 

Hi,

We had a session at the Austin summit on host maintenance:
https://etherpad.openstack.org/p/AUS-ops-Nova-maint

The discussion has now gotten to the point where we should start
prototyping a service that hosts the maintenance. Nova could have a link
to this new service, but no maintenance functionality should be placed in
the Nova project itself. I was working to have this in Nova, but it now
looks better to build the prototype first:
https://review.openstack.org/310510/

From the discussion on the review above, the new service might have a
maintenance API endpoint that links to a host by utilizing the "hostid"
used in Nova, and then there should be a tenant_id-specific endpoint to
get what each project needs. Something like:

http://maintenancethingy/maintenance/{hostid}
http://maintenancethingy/maintenance/{hostid}/{tenant_id}

This will ensure the tenant does not learn details about the host, but
can still get the information it needs about maintenance affecting its
instances.

On the Telco/NFV side, the OPNFV Doctor project sets the requirements for
this from that direction. I am personally interested in that part, but to
have this serve all operator requirements, it is best to bring it up here.

This could be further discussed in Barcelona, and we should get other
people interested in helping to start this. Any suggestion for the Ops
session?

Looking forward,
Tomi
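Since the service is still only a proposal, a purely hypothetical sketch
of what the tenant-scoped endpoint could return may help frame the
discussion; every field name here is invented, not taken from the review:

    GET /maintenance/{hostid}/{tenant_id}

    {
        "hostid": "75d7e85bf8a54bd3bbca5818f6fce78f",
        "state": "planned",
        "window_start": "2016-11-07T02:00:00Z",
        "window_end": "2016-11-07T04:00:00Z",
        "instances": ["0b9c90f8-6b2a-4e2c-9b3a-0e4b7e2f1a6d"]
    }

The point is that a tenant sees only the schedule and which of its own
instances are affected, never the underlying host details.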
From tom at openstack.org  Thu Oct 13 07:19:51 2016
From: tom at openstack.org (Tom Fifield)
Date: Thu, 13 Oct 2016 15:19:51 +0800
Subject: [Openstack-operators] Ops at Barcelona - Call for Moderators
Message-ID: 

Hello all,

The Ops design summit sessions are now listed on the schedule!

https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Ops+Summit%3A

Please tick them and set up your summit app :)

We are still looking for moderators for the following sessions:

* OpenStack on Containers
* Containers on OpenStack
* Migration to OpenStack
* Fleet Management
* Feedback to PWG
* Neutron pain points
* Config Mgmt
* HAProxy, MySQL, Rabbit Tuning
* Swift
* Horizon
* OpenStack CLI
* Baremetal Deploy
* OsOps
* CI/CD workflows
* Alt Deployment tech
* ControlPlane Design (multi region)
* Docs

==> If you are interested in moderating a session, please

* write your name in its etherpad (multiple moderators OK!)

==> I'll be honest, I have no idea what some of the sessions are supposed
to be, so also:

* write a short description for the session so the agenda can be updated

For those of you who want to know what it takes, check out the
Moderator's Guide:
https://wiki.openstack.org/wiki/Operations/Meetups#Moderators_Guide
& ask questions - we're here to help!

Regards,

Tom, on behalf of the Ops Meetups Team
https://wiki.openstack.org/wiki/Ops_Meetups_Team

From liuyulong.xa at gmail.com  Thu Oct 13 10:41:59 2016
From: liuyulong.xa at gmail.com (liu yulong)
Date: Thu, 13 Oct 2016 18:41:59 +0800
Subject: [Openstack-operators] [nova-ceph-pools] one nova az with multiple ceph rbd pool
Message-ID: 

Hi all,

We are facing a nova operations issue: setting a different ceph rbd pool
for each nova compute node within one availability zone. For instance:

(1) compute-node-1 in az1, with images_rbd_pool=pool1
(2) compute-node-2 in az1, with images_rbd_pool=pool2

This setup normally works fine, but we hit a problem when resizing an
instance. When we resize instance-1, which originally lives on
compute-node-1, nova runs its scheduling procedure; assume nova-scheduler
chooses compute-node-2. Nova then fails with the following error:
http://paste.openstack.org/show/585540/. The exception is raised because,
on compute-node-2, nova cannot find the instance's disk in pool1. Is
there a way nova can handle this? Cinder has something similar: a cinder
volume can have a host attribute like host_name at pool_name#ceph.

We use this setup because, when expanding storage capacity, we want to
limit the impact of the ceph rebalance.

One solution I found is the AggregateInstanceExtraSpecsFilter, which
matches host aggregate metadata against flavor metadata. We tried
creating host aggregates like:

az1-pool1 with host compute-node-1 and metadata {ceph_pool: pool1};
az1-pool2 with host compute-node-2 and metadata {ceph_pool: pool2};

and flavors like:

flavor1-pool1 with metadata {ceph_pool: pool1};
flavor2-pool1 with metadata {ceph_pool: pool1};
flavor1-pool2 with metadata {ceph_pool: pool2};
flavor2-pool2 with metadata {ceph_pool: pool2};

But this introduces a new issue at instance-create time: which flavor
should be used during nova boot? The business/application layer seems to
need its own flavor scheduler.

So finally, I want to ask whether there is a best practice for using
multiple ceph rbd pools within one availability zone.

Best regards,
LIU Yulong
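For reference, the aggregate/flavor wiring described above looks roughly
like this with the Mitaka-era CLI; the names follow the example in the
message, and the scheduler must have AggregateInstanceExtraSpecsFilter
enabled in scheduler_default_filters:

    nova aggregate-create az1-pool1 az1
    nova aggregate-add-host az1-pool1 compute-node-1
    nova aggregate-set-metadata az1-pool1 ceph_pool=pool1

    nova aggregate-create az1-pool2 az1
    nova aggregate-add-host az1-pool2 compute-node-2
    nova aggregate-set-metadata az1-pool2 ceph_pool=pool2

    # scope the key so ComputeCapabilitiesFilter ignores it
    nova flavor-key flavor1-pool1 set aggregate_instance_extra_specs:ceph_pool=pool1
    nova flavor-key flavor1-pool2 set aggregate_instance_extra_specs:ceph_pool=pool2

This pins each flavor to the aggregate whose hosts use the matching rbd
pool; the open question in the thread, who picks the right flavor at boot
time, remains with the caller.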
From serverascode at gmail.com  Thu Oct 13 13:51:32 2016
From: serverascode at gmail.com (Curtis)
Date: Thu, 13 Oct 2016 07:51:32 -0600
Subject: [Openstack-operators] OPNFV delivered its new Colorado release
In-Reply-To: <884BFBB6F562F44F91BF83F77C24E9972E7EB5F0@lhreml507-mbx>
References: <884BFBB6F562F44F91BF83F77C24E9972E7EB5F0@lhreml507-mbx>
Message-ID: 

Thanks Uli!

Based on the NFV session we had at the recent NYC ops meetup, I think that
some openstack operators aren't sure what NFV is, and maybe don't know
about the OPNFV project. That said, NFV is hard to define because it's a
broad term.

One thing I can say for sure is that many of the requirements that the
telecom industry has for running workloads on openstack clouds are, in
general, similar to requirements many openstack operators have: from
performance, to networking, to monitoring, to managing uptime of the
"data plane" (i.e. virtual machines), etc. OK, so maybe not everyone needs
to run an occasional instance on real-time KVM, but otherwise, IMHO,
quite similar.

A quick read through the OPNFV project list [1] might show some
interesting work that could be useful to any openstack operator. :)

Thanks,
Curtis.

[1]: https://wiki.opnfv.org/display/PROJ/Full+Project+List

On Wed, Oct 12, 2016 at 8:17 AM, Ulrich Kleber wrote:

> Hi,
>
> I didn't see an official announcement, so I'd like to point you to the
> new release of OPNFV.
>
> https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0
>
> OPNFV is an open source project and one of the most important users of
> OpenStack in the Telecom/NFV area. Maybe it is interesting for your work.
>
> Feel free to contact me or meet during the Barcelona summit at the
> session of the OpenStack Operators Telecom/NFV Functional Team
> (https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16768/openstack-operators-telecomnfv-functional-team).
>
> Cheers,
> Uli
>
> Ulrich KLEBER
> Chief Architect Cloud Platform
> European Research Center
> IT R&D Division
> Riesstraße 25
> 80992 München
> Mobile: +49 (0)173 4636144
> Mobile (China): +86 13005480404

From pieter.kruithof.jr at intel.com  Thu Oct 13 13:59:33 2016
From: pieter.kruithof.jr at intel.com (Kruithof Jr, Pieter)
Date: Thu, 13 Oct 2016 13:59:33 +0000
Subject: [Openstack-operators] Participate in a Usability Study at Barcelona: Feel good and get a free bluetooth speaker
Message-ID: <28BAEFDC-0124-4796-BA4C-41D7B883E065@intel.com>

Apologies for any cross-postings.

Operators,

For those attending the Barcelona Summit, there will be two usability
studies being conducted with cloud operators. We had nearly a 100% show
rate last summit, and the results had a direct impact on the
OpenStackClient. In fact, the results were shared at the OSC working
session the next day. Intel is also providing a bluetooth speaker to show
our appreciation for your time.

___

The first study will be on Monday, October 24th and is intended to
investigate the current APIs to understand any specific pain points
associated with completing tasks that span projects, such as quotas. This
study will last 45 minutes per operator.

You can schedule a time here: http://doodle.com/poll/fwfi2sfcuctxv3u8

Note that you may need to set the time zone in Doodle to Spain > Ceuta

___

The second study will be on Tuesday, October 25th and is intended to
investigate the OpenStackClient to understand any specific pain points
and opportunities associated with completing tasks with the client. This
study will last 45 minutes per operator.
We ran a similar study at the previous summit, and the feedback from
users was that it was a good opportunity to "test drive" the client.

You can schedule a time here: http://doodle.com/poll/894aqsmheaa2mv5a

Note that you may need to set the time zone in Doodle to Spain > Ceuta

For both studies, someone from the OpenStack UX project will send you a
calendar invite after you select a time(s) convenient for you.

Thanks,

Piet Kruithof
PTL OpenStack UX

From patricia.dugan at oneops.com  Thu Oct 13 14:33:34 2016
From: patricia.dugan at oneops.com (Patricia Dugan)
Date: Thu, 13 Oct 2016 07:33:34 -0700
Subject: [Openstack-operators] the url for moderator request
In-Reply-To: 
References: 
Message-ID: <594C6030-2A95-4D3A-B52E-CBDAE1A5F0F0@oneops.com>

Is this the url for the moderator request:
https://etherpad.openstack.org/p/BCN-ops-meetup << at the bottom, is that
where we are supposed to fill out?

> On Oct 13, 2016, at 12:19 AM, openstack-operators-request at lists.openstack.org wrote:
>
> [...]
> ------------------------------
>
> Message: 21
> Date: Thu, 13 Oct 2016 15:19:51 +0800
> From: Tom Fifield
> To: OpenStack Operators
> Subject: [Openstack-operators] Ops at Barcelona - Call for Moderators
> Message-ID:
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Hello all,
>
> The Ops design summit sessions are now listed on the schedule!
>
> https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Ops+Summit%3A
>
> Please tick them and set up your summit app :)
>
> We are still looking for moderators for the following sessions:
>
> * OpenStack on Containers
> * Containers on OpenStack
> * Migration to OpenStack
> * Fleet Management
> * Feedback to PWG
> * Neutron pain points
> * Config Mgmt
> * HAProxy, MySQL, Rabbit Tuning
> * Swift
> * Horizon
> * OpenStack CLI
> * Baremetal Deploy
> * OsOps
> * CI/CD workflows
> * Alt Deployment tech
> * ControlPlane Design (multi region)
> * Docs
>
> ==> If you are interested in moderating a session, please
>
> * write your name in its etherpad (multiple moderators OK!)
>
> ==> I'll be honest, I have no idea what some of the sessions are
> supposed to be, so also:
>
> * write a short description for the session so the agenda can be updated
>
> For those of you who want to know what it takes, check out the
> Moderator's Guide:
> https://wiki.openstack.org/wiki/Operations/Meetups#Moderators_Guide &
> ask questions - we're here to help!
>
> Regards,
>
> Tom, on behalf of the Ops Meetups Team
> https://wiki.openstack.org/wiki/Ops_Meetups_Team
>
> ------------------------------
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> End of OpenStack-operators Digest, Vol 72, Issue 11
> ***************************************************

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhang.peng at jp.fujitsu.com Fri Oct 14 00:01:02 2016
From: zhang.peng at jp.fujitsu.com (Zhang, Peng)
Date: Fri, 14 Oct 2016 00:01:02 +0000
Subject: [Openstack-operators] [Nova][icehouse] Any way to rotate logs by size
Message-ID: <159656130556744984BEE6E8683CE872135CBEB9@G01JPEXMBYT04>

Hi guys,

The disk of our Nova controller has filled up with log files several times, taking the node down. Although the operating system's log rotation runs fine every hour, it is not sufficient. Has anyone got an idea how to rotate log files by size (e.g. 100MB)?

It takes time to add a new log archive server, so a temporary solution should be considered.

Best regards

From: Peng, Zhang

FYI: My own solution (not a good one) is shared here:

Referring to the document http://docs.openstack.org/developer/oslo.log/configfiles/example_nova.html I have found a way using Python's logging modules.

I added a configuration file as follows under the /etc/nova/ directory:

File name: logging.conf

[DEFAULT]
logfile = /var/log/nova/api.log

[loggers]
keys = root

[handlers]
keys = rotatingfile

[formatters]
keys = context

[logger_root]
level = DEBUG
handlers = rotatingfile

[handler_rotatingfile]
class = handlers.RotatingFileHandler
args = ('%(logfile)s', 'a', 5024000, 5)
formatter = context

[formatter_context]
class = nova.openstack.common.log.ContextFormatter

And I also changed this parameter in nova.conf to make Nova use the above configuration:

log_config_append=/etc/nova/logging.conf

Everything seems to go well, except that all Nova services such as api, scheduler, etc. begin to put their log messages into the same file (api.log)!

So I had to put in a hack that replaces the file name defined in the DEFAULT section:

logfile = '%s.log' % (os.path.join('/var/log', *os.path.basename(__import__('inspect').stack()[-1][1]).split('-')))

It works, but it's also weird.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From klindgren at godaddy.com Fri Oct 14 01:41:40 2016
From: klindgren at godaddy.com (Kris G. Lindgren)
Date: Fri, 14 Oct 2016 01:41:40 +0000
Subject: [Openstack-operators] [Nova][icehouse] Any way to rotate logs by size
In-Reply-To: <159656130556744984BEE6E8683CE872135CBEB9@G01JPEXMBYT04>
References: <159656130556744984BEE6E8683CE872135CBEB9@G01JPEXMBYT04>
Message-ID: <8B400974-F827-4F36-AEDC-DE69B145D02B@godaddy.com>

Add "size 100M" to your logrotate.conf.

It will only rotate the log once its size is greater than that.

Sent from my iPad

On Oct 13, 2016, at 6:03 PM, Zhang, Peng wrote:

[...]
From zhang.peng at jp.fujitsu.com Fri Oct 14 01:53:05 2016
From: zhang.peng at jp.fujitsu.com (Zhang, Peng)
Date: Fri, 14 Oct 2016 01:53:05 +0000
Subject: [Openstack-operators] [Nova][icehouse] Any way to rotate logs by size
In-Reply-To: <8B400974-F827-4F36-AEDC-DE69B145D02B@godaddy.com>
References: <159656130556744984BEE6E8683CE872135CBEB9@G01JPEXMBYT04> <8B400974-F827-4F36-AEDC-DE69B145D02B@godaddy.com>
Message-ID: <159656130556744984BEE6E8683CE872135CC00F@G01JPEXMBYT04>

Dear Kris,

Thank you for the quick reply.

> Add "size 100M" to your logrotate.conf.
> It will only rotate the log once its size is greater than that.

Actually, my logrotate.conf is already set as you mention. The problem is that the system logrotate runs as a cron task, so size-based rotation only happens when the task runs. I have already changed the cron interval from daily to hourly, but that frequency is not enough. The log file still grows to 160MB or even bigger.

Maybe I should make the cron interval much smaller, such as once per minute. Or maybe there is a better way?

Best regards

------------------------original reply--------------------------------------
From: Kris G. Lindgren [mailto:klindgren at godaddy.com]
Sent: Friday, October 14, 2016 10:42 AM
To: Zhang, Peng
Cc: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] [Nova][icehouse] Any way to rotate logs by size

[...]

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
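For what it's worth, a minimal sketch of the size-plus-frequent-cron setup being discussed (paths, the 100M threshold and the 5-minute interval are examples, not a tested Icehouse recipe):

/etc/logrotate.d/nova-size:

    /var/log/nova/*.log {
        size 100M
        rotate 5
        compress
        missingok
        notifempty
        copytruncate
    }

/etc/cron.d/nova-logrotate, so the size check actually runs often enough:

    */5 * * * * root /usr/sbin/logrotate /etc/logrotate.d/nova-size

copytruncate avoids having to signal the Nova daemons to reopen their log files, at the cost of possibly losing a few lines written between the copy and the truncate.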
From adam.kijak at corp.ovh.com Fri Oct 14 07:57:14 2016
From: adam.kijak at corp.ovh.com (Adam Kijak)
Date: Fri, 14 Oct 2016 07:57:14 +0000
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?
In-Reply-To:
References: <85c30a1d875a437d9394e70ab5f4058a@corp.ovh.com> <77C16534-E8D7-4657-8691-FC02D91CF5D4@gmail.com> <29c531e1c0614dc1bd1cf587d69aa45b@corp.ovh.com>
Message-ID:

> From: Warren Wang
> Sent: Wednesday, October 12, 2016 10:02 PM
> To: Adam Kijak
> Cc: Abel Lopez; openstack-operators
> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?
>
> If fault domain is a concern, you can always split the cloud up into 3 regions, each having a dedicated Ceph cluster. It isn't necessarily going to mean more hardware, just logical splits. This is kind of assuming that the network doesn't share the same fault domain though.

This is not an option, because having Region1-1, Region1-2, ..., Region1-10 would not be very convenient for users.

> Alternatively, you can split the hardware for the Ceph boxes into multiple clusters, and use multi backend Cinder to talk to the same set of hypervisors to use multiple Ceph clusters. We're doing that to migrate from one Ceph cluster to another. You can even mount a volume from each cluster into a single instance.

Multiple Ceph clusters on Cinder are not a problem, I agree. Unfortunately we use Ceph for Nova as well (disks of instances are on Ceph directly).

> Keep in mind that you don't really want to shrink a Ceph cluster too much. What's "too big"? You should keep growing so that the fault domains aren't too small (3 physical rack min), or you guarantee that the entire cluster stops if you lose network.
>
> Just my 2 cents,

Thanks!
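For reference, the multi-backend split Warren describes is plain cinder.conf configuration; a rough sketch (the backend names, ceph.conf paths and secret UUIDs below are placeholders):

[DEFAULT]
enabled_backends = ceph-a,ceph-b

[ceph-a]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-a
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph-a.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret uuid for cluster A>

[ceph-b]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-b
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph-b.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret uuid for cluster B>

Each backend then gets a volume type (cinder type-create ceph-a; cinder type-key ceph-a set volume_backend_name=ceph-a), and every hypervisor needs a libvirt secret defined per cluster. As Adam notes, none of this helps with Nova ephemeral disks, where nova.conf takes a single images_rbd_pool / images_rbd_ceph_conf per compute node.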
From adam.kijak at corp.ovh.com Fri Oct 14 08:53:24 2016
From: adam.kijak at corp.ovh.com (Adam Kijak)
Date: Fri, 14 Oct 2016 08:53:24 +0000
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?
In-Reply-To: <1476304977-sup-4753@fewbar.com>
References: <85c30a1d875a437d9394e70ab5f4058a@corp.ovh.com> <1476124873.6209.3.camel@gmail.com> <839b3aba73394bf9aae56c801687e50c@corp.ovh.com> <1476304977-sup-4753@fewbar.com>
Message-ID: <23d002718eeb4c5b97b1d607482a8b7a@corp.ovh.com>

> ________________________________________
> From: Clint Byrum
> Sent: Wednesday, October 12, 2016 10:46 PM
> To: openstack-operators
> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?
>
> Excerpts from Adam Kijak's message of 2016-10-12 12:23:41 +0000:
> > > ________________________________________
> > > From: Xav Paice
> > > Sent: Monday, October 10, 2016 8:41 PM
> > > To: openstack-operators at lists.openstack.org
> > > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?
> > >
> > > I'm really keen to hear more about those limitations.
> >
> > Basically it's all related to the failure domain ("blast radius") and risk management.
> > Bigger Ceph cluster means more users.
>
> Are these risks well documented? Since Ceph is specifically designed
> _not_ to have the kind of large blast radius that one might see with
> say, a centralized SAN, I'm curious to hear what events trigger
> cluster-wide blasts.

In theory yes, Ceph is designed to be fault tolerant, but from our experience it's not always like that. I think it's not well documented, but I know of this case:
https://www.mail-archive.com/ceph-users at lists.ceph.com/msg32804.html

> > Growing the Ceph cluster temporarily slows it down, so many users will be affected.
>
> One might say that a Ceph cluster that can't be grown without the users
> noticing is an over-subscribed Ceph cluster. My understanding is that
> one is always advised to provision a certain amount of cluster capacity
> for growing and replicating to replaced drives.

I agree that provisioning a fixed-size cluster would solve some problems, but planning the capacity is not always easy. Predicting the size and making it cost effective (a big, empty Ceph cluster costs a lot at the beginning) is quite difficult. Also, adding a new Ceph cluster will always be more transparent to users than manipulating an existing one (especially when growing a pool's PGs).
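As a concrete example of the user-visible growth cost mentioned above: bumping placement groups on an existing pool is done with the standard Ceph CLI, and both steps below trigger data movement and peering across the cluster while clients are doing I/O (pool name and numbers are made up):

    ceph osd pool set volumes pg_num 4096
    ceph osd pool set volumes pgp_num 4096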
From blair.bethwaite at gmail.com Fri Oct 14 10:08:28 2016
From: blair.bethwaite at gmail.com (Blair Bethwaite)
Date: Fri, 14 Oct 2016 21:08:28 +1100
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?
In-Reply-To: <23d002718eeb4c5b97b1d607482a8b7a@corp.ovh.com>
References: <85c30a1d875a437d9394e70ab5f4058a@corp.ovh.com> <1476124873.6209.3.camel@gmail.com> <839b3aba73394bf9aae56c801687e50c@corp.ovh.com> <1476304977-sup-4753@fewbar.com> <23d002718eeb4c5b97b1d607482a8b7a@corp.ovh.com>
Message-ID:

Hi Adam,

I agree somewhat, capacity management and growth at scale is something of a pain. Ceph gives you a hugely powerful and flexible way to manage data placement through CRUSH, but there is very little quality info about, or examples of, non-naive crushmap configurations.

I think I understand what you are getting at in regards to failure-domain, e.g., a large cluster of 1000+ drives may require a single storage pool (e.g., for nova) across most/all of that storage. The chance of overlapping drive failures (overlapping meaning before recovery has completed) in multiple nodes is higher the more drives there are in the pool, unless you design your crushmap to limit the size of any replica-domain (i.e., the leaf crush bucket that a single copy of an object may end up in). And in the rbd use case, if you are unlucky and lose even just a tiny fraction of objects, then due to random placement there is a good chance you have lost a handful of objects from most/all rbd volumes in the cluster, which could make for many unhappy users with potentially unrecoverable filesystems in those rbds.

The guys at UnitedStack did a nice presentation that touched on this a while back (http://www.slideshare.net/kioecn/build-an-highperformance-and-highdurable-block-storage-service-based-on-ceph), but I'm not sure I follow their durability model just from these slides, and if you're going to play with this you really do want a tool to calculate/simulate the impact of the changes.

Interesting discussion - maybe loop in ceph-users?

Cheers,

On 14 October 2016 at 19:53, Adam Kijak wrote:
> [...]

--
Cheers,
~Blairo
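As a minimal sketch of the kind of non-naive placement Blair is describing (bucket names are illustrative, and the rule would normally be edited via crushtool on a decompiled map): a replicated rule that forces each copy of an object into a different rack, so that a single replica-domain is at most one rack's worth of drives:

    rule replicated_racks {
        ruleset 1
        type replicated
        min_size 2
        max_size 3
        step take default
        step chooseleaf firstn 0 type rack
        step emit
    }

Whether a rack is the right size for the replica-domain depends entirely on cluster scale and how much overlapping-failure probability you are willing to carry.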
From edgar.magana at workday.com Fri Oct 14 18:05:55 2016
From: edgar.magana at workday.com (Edgar Magana)
Date: Fri, 14 Oct 2016 18:05:55 +0000
Subject: [Openstack-operators] the url for moderator request
In-Reply-To: <594C6030-2A95-4D3A-B52E-CBDAE1A5F0F0@oneops.com>
References: <594C6030-2A95-4D3A-B52E-CBDAE1A5F0F0@oneops.com>
Message-ID: <73699A79-7008-4F93-BC1B-B12F0F7B6EE5@workday.com>

Patricia,

I think we are announcing which session we would like to moderate and add our names in the respective etherpad.

Thanks,

Edgar

From: Patricia Dugan
Date: Thursday, October 13, 2016 at 7:33 AM
To: "openstack-operators at lists.openstack.org"
Subject: [Openstack-operators] the url for moderator request

Is this the url for the moderator request: https://etherpad.openstack.org/p/BCN-ops-meetup << at the bottom, is that where we are supposed to fill out?

On Oct 13, 2016, at 12:19 AM, openstack-operators-request at lists.openstack.org wrote:

[OpenStack-operators Digest, Vol 72, Issue 11 quoted in full; snipped]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Ops+Summit%3A Please tick them and set up your summit app :) We are still looking for moderators for the following sessions: * OpenStack on Containers * Containers on OpenStack * Migration to OpenStack * Fleet Management * Feedback to PWG * Neutron pain points * Config Mgmt * HAProy, MySQL, Rabbit Tuning * Swift * Horizon * OpenStack CLI * Baremetal Deploy * OsOps * CI/CD workflows * Alt Deployment tech * ControlPlane Design(multi region) * Docs ==> If you are interested in moderating a session, please * write your name in its etherpad (multiple moderators OK!) ==> I'll be honest, I have no idea what some of the sessions are supposed to be, so also: * write a short description for the session so the agenda can be updated For those of you who want to know what it takes check out the Moderator's Guide: https://wiki.openstack.org/wiki/Operations/Meetups#Moderators_Guide & ask questions - we're here to help! Regards, Tom, on behalf of the Ops Meetups Team https://wiki.openstack.org/wiki/Ops_Meetups_Team ------------------------------ _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators End of OpenStack-operators Digest, Vol 72, Issue 11 *************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From patricia.dugan at oneops.com Fri Oct 14 18:08:11 2016 From: patricia.dugan at oneops.com (Patricia Dugan) Date: Fri, 14 Oct 2016 11:08:11 -0700 Subject: [Openstack-operators] the url for moderator request In-Reply-To: <73699A79-7008-4F93-BC1B-B12F0F7B6EE5@workday.com> References: <594C6030-2A95-4D3A-B52E-CBDAE1A5F0F0@oneops.com> <73699A79-7008-4F93-BC1B-B12F0F7B6EE5@workday.com> Message-ID: <2AE68B42-8E83-40BE-B32A-6E0DC65131D7@oneops.com> Thanks, EM. I was confirming the url, the location to view the need and supply our availability. We?ve added our information to the url I provided. Let me know if that?s not the location we need to be listing our availability to contribute on. Thx. See you soon! > On Oct 14, 2016, at 11:05 AM, Edgar Magana wrote: > > Patricia, > > I think we are announcing which session we would like to moderate and add our names in the respective etherpad. > > Thanks, > > Edgar > > From: Patricia Dugan > Date: Thursday, October 13, 2016 at 7:33 AM > To: "openstack-operators at lists.openstack.org" > Subject: [Openstack-operators] the url for moderator request > > Is this the url for the moderator request: https://etherpad.openstack.org/p/BCN-ops-meetup << at the bottom, is that where we are supposed to fill out? > > > On Oct 13, 2016, at 12:19 AM, openstack-operators-request at lists.openstack.org wrote: > > Send OpenStack-operators mailing list submissions to > openstack-operators at lists.openstack.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > or, via email, send a message with subject or body 'help' to > openstack-operators-request at lists.openstack.org > > You can reach the person managing the list at > openstack-operators-owner at lists.openstack.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of OpenStack-operators digest..." > > > Today's Topics: > > 1. Re: [openstack-operators][ceph][nova] How do you handle Nova > on Ceph? 
(Adam Kijak) > 2. Ubuntu package for Octavia (Lutz Birkhahn) > 3. Re: [openstack-operators][ceph][nova] How do you handle Nova > on Ceph? (Adam Kijak) > 4. [nova] Does anyone use the os-diagnostics API? (Matt Riedemann) > 5. OPNFV delivered its new Colorado release (Ulrich Kleber) > 6. Re: [nova] Does anyone use the os-diagnostics API? (Tim Bell) > 7. Re: [nova] Does anyone use the os-diagnostics > API? (Joe Topjian) > 8. Re: OPNFV delivered its new Colorado release (Jay Pipes) > 9. glance, nova backed by NFS (Curtis) > 10. Re: glance, nova backed by NFS (Kris G. Lindgren) > 11. Re: glance, nova backed by NFS (Tobias Sch?n) > 12. Re: glance, nova backed by NFS (Kris G. Lindgren) > 13. Re: glance, nova backed by NFS (Curtis) > 14. Re: glance, nova backed by NFS (James Penick) > 15. Re: glance, nova backed by NFS (Curtis) > 16. Re: Ubuntu package for Octavia (Xav Paice) > 17. Re: [openstack-operators][ceph][nova] How do you handle Nova > on Ceph? (Warren Wang) > 18. Re: [openstack-operators][ceph][nova] How do > you handle Nova > on Ceph? (Clint Byrum) > 19. Disable console for an instance (Blair Bethwaite) > 20. host maintenance (Juvonen, Tomi (Nokia - FI/Espoo)) > 21. Ops at Barcelona - Call for Moderators (Tom Fifield) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 12 Oct 2016 12:23:41 +0000 > From: Adam Kijak > To: Xav Paice , > "openstack-operators at lists.openstack.org" > > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] > How do you handle Nova on Ceph? > Message-ID: <839b3aba73394bf9aae56c801687e50c at corp.ovh.com> > Content-Type: text/plain; charset="iso-8859-1" > > > ________________________________________ > From: Xav Paice > Sent: Monday, October 10, 2016 8:41 PM > To: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph? > > On Mon, 2016-10-10 at 13:29 +0000, Adam Kijak wrote: > > Hello, > > We use a Ceph cluster for Nova (Glance and Cinder as well) and over > time, > more and more data is stored there. We can't keep the cluster so big > because of > Ceph's limitations. Sooner or later it needs to be closed for adding > new > instances, images and volumes. Not to mention it's a big failure > domain. > > I'm really keen to hear more about those limitations. > > Basically it's all related to the failure domain ("blast radius") and risk management. > Bigger Ceph cluster means more users. > Growing the Ceph cluster temporary slows it down, so many users will be affected. > There are bugs in Ceph which can cause data corruption. It's rare, but when it happens > it can affect many (maybe all) users of the Ceph cluster. > > > > How do you handle this issue? > What is your strategy to divide Ceph clusters between compute nodes? > How do you solve VM snapshot placement and migration issues then > (snapshots will be left on older Ceph)? > > Having played with Ceph and compute on the same hosts, I'm a big fan of > separating them and having dedicated Ceph hosts, and dedicated compute > hosts. That allows me a lot more flexibility with hardware > configuration and maintenance, easier troubleshooting for resource > contention, and also allows scaling at different rates. > > Exactly, I consider it the best practice as well. 
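(On the nova.conf point just above: correct - the libvirt RBD backend
takes exactly one pool for instance disks. A minimal sketch of the
relevant settings, with made-up pool and cephx user names, would be:)

  [libvirt]
  images_type = rbd
  images_rbd_pool = vms                # only a single pool can be set here
  images_rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = nova                      # illustrative cephx user
  rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337   # illustrative uuid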
> > > > > ------------------------------ > > Message: 2 > Date: Wed, 12 Oct 2016 12:25:38 +0000 > From: Lutz Birkhahn > To: "openstack-operators at lists.openstack.org" > > Subject: [Openstack-operators] Ubuntu package for Octavia > Message-ID: > Content-Type: text/plain; charset="utf-8" > > Has anyone seen Ubuntu packages for Octavia yet? > > We?re running Ubuntu 16.04 with Newton, but for whatever reason I can not find any Octavia package? > > So far I?ve only found in https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun the following: > > Ubuntu Packages Setup: Install octavia with your favorite distribution: ?pip install octavia? > > That was not exactly what we would like to do in our production cloud? > > Thanks, > > /lutz > -------------- next part -------------- > A non-text attachment was scrubbed... > Name: smime.p7s > Type: application/x-pkcs7-signature > Size: 6404 bytes > Desc: not available > URL: > > ------------------------------ > > Message: 3 > Date: Wed, 12 Oct 2016 12:35:48 +0000 > From: Adam Kijak > To: Abel Lopez > Cc: openstack-operators > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] > How do you handle Nova on Ceph? > Message-ID: <29c531e1c0614dc1bd1cf587d69aa45b at corp.ovh.com> > Content-Type: text/plain; charset="iso-8859-1" > > > _______________________________________ > From: Abel Lopez > Sent: Monday, October 10, 2016 9:57 PM > To: Adam Kijak > Cc: openstack-operators > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph? > > Have you thought about dedicated pools for cinder/nova and a separate pool for glance, and any other uses you might have? > You need to setup secrets on kvm, but you can have cinder creating volumes from glance images quickly in different pools > > We already have separate pool for images, volumes and instances. > Separate pools doesn't really split the failure domain though. > Also AFAIK you can't set up multiple pools for instances in nova.conf, right? > > > > ------------------------------ > > Message: 4 > Date: Wed, 12 Oct 2016 09:00:11 -0500 > From: Matt Riedemann > To: "openstack-operators at lists.openstack.org" > > Subject: [Openstack-operators] [nova] Does anyone use the > os-diagnostics API? > Message-ID: <5dae5c89-b682-15c7-11c6-d9a5481076a4 at linux.vnet.ibm.com> > Content-Type: text/plain; charset=utf-8; format=flowed > > The current form of the nova os-diagnostics API is hypervisor-specific, > which makes it pretty unusable in any generic way, which is why Tempest > doesn't test it. > > Way back when the v3 API was a thing for 2 minutes there was work done > to standardize the diagnostics information across virt drivers in nova. > The only thing is we haven't exposed that out of the REST API yet, but > there is a spec proposing to do that now: > > https://review.openstack.org/#/c/357884/ > > This is an admin-only API so we're trying to keep an end user point of > view out of discussing it. For example, the disk details don't have any > unique identifier. We could add one, but would it be useful to an admin? > > This API is really supposed to be for debug, but the question I have for > this list is does anyone actually use the existing os-diagnostics API? > And if so, how do you use it, and what information is most useful? If > you are using it, please review the spec and provide any input on what's > proposed for outputs. 
> > -- > > Thanks, > > Matt Riedemann > > > > > ------------------------------ > > Message: 5 > Date: Wed, 12 Oct 2016 14:17:50 +0000 > From: Ulrich Kleber > To: "openstack-operators at lists.openstack.org" > > Subject: [Openstack-operators] OPNFV delivered its new Colorado > release > Message-ID: <884BFBB6F562F44F91BF83F77C24E9972E7EB5F0 at lhreml507-mbx> > Content-Type: text/plain; charset="iso-8859-1" > > Hi, > I didn't see an official announcement, so I like to point you to the new release of OPNFV. > https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0 > OPNFV is an open source project and one of the most important users of OpenStack in the Telecom/NFV area. Maybe it is interesting for your work. > Feel free to contact me or meet during the Barcelona summit at the session of the OpenStack Operators Telecom/NFV Functional Team (https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16768/openstack-operators-telecomnfv-functional-team). > Cheers, > Uli > > > Ulrich KLEBER > Chief Architect Cloud Platform > European Research Center > IT R&D Division > [huawei_logo] > Riesstra?e 25 > 80992 M?nchen > Mobile: +49 (0)173 4636144 > Mobile (China): +86 13005480404 > > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > -------------- next part -------------- > A non-text attachment was scrubbed... > Name: image001.jpg > Type: image/jpeg > Size: 6737 bytes > Desc: image001.jpg > URL: > > ------------------------------ > > Message: 6 > Date: Wed, 12 Oct 2016 14:35:54 +0000 > From: Tim Bell > To: Matt Riedemann > Cc: "openstack-operators at lists.openstack.org" > > Subject: Re: [Openstack-operators] [nova] Does anyone use the > os-diagnostics API? > Message-ID: <248C8965-1ECE-4CA3-9B88-A7C75CF8B3AD at cern.ch> > Content-Type: text/plain; charset="utf-8" > > > > On 12 Oct 2016, at 07:00, Matt Riedemann wrote: > > The current form of the nova os-diagnostics API is hypervisor-specific, which makes it pretty unusable in any generic way, which is why Tempest doesn't test it. > > Way back when the v3 API was a thing for 2 minutes there was work done to standardize the diagnostics information across virt drivers in nova. The only thing is we haven't exposed that out of the REST API yet, but there is a spec proposing to do that now: > > https://review.openstack.org/#/c/357884/ > > This is an admin-only API so we're trying to keep an end user point of view out of discussing it. For example, the disk details don't have any unique identifier. We could add one, but would it be useful to an admin? > > This API is really supposed to be for debug, but the question I have for this list is does anyone actually use the existing os-diagnostics API? And if so, how do you use it, and what information is most useful? If you are using it, please review the spec and provide any input on what's proposed for outputs. > > > Matt, > > Thanks for asking. We?ve used the API in the past as a way of getting the usage data out of Nova. We had problems running ceilometer at scale and this was a way of retrieving the data for our accounting reports. We created a special policy configuration to allow authorised users query this data without full admin rights. > > From the look of the new spec, it would be fairly straightforward to adapt the process to use the new format as all the CPU utilisation data is there. 
> > Tim > > > -- > > Thanks, > > Matt Riedemann > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > ------------------------------ > > Message: 7 > Date: Wed, 12 Oct 2016 08:44:14 -0600 > From: Joe Topjian > To: "openstack-operators at lists.openstack.org" > > Subject: Re: [Openstack-operators] [nova] Does anyone use the > os-diagnostics > API? > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Hi Matt, Tim, > > Thanks for asking. We?ve used the API in the past as a way of getting the > > usage data out of Nova. We had problems running ceilometer at scale and > this was a way of retrieving the data for our accounting reports. We > created a special policy configuration to allow authorised users query this > data without full admin rights. > > > We do this as well. > > > > From the look of the new spec, it would be fairly straightforward to adapt > the process to use the new format as all the CPU utilisation data is there. > > > I agree. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 8 > Date: Wed, 12 Oct 2016 13:02:11 -0400 > From: Jay Pipes > To: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] OPNFV delivered its new Colorado > release > Message-ID: > Content-Type: text/plain; charset=windows-1252; format=flowed > > On 10/12/2016 10:17 AM, Ulrich Kleber wrote: > > Hi, > > I didn?t see an official announcement, so I like to point you to the new > release of OPNFV. > > https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0 > > OPNFV is an open source project and one of the most important users of > OpenStack in the Telecom/NFV area. Maybe it is interesting for your work. > > Hi Ulrich, > > I'm hoping you can explain to me what exactly OPNFV is producing in its > releases. I've been through a number of the Jira items linked in the > press release above and simply cannot tell what is being actually > delivered by OPNFV versus what is just something that is in an OpenStack > component or deployment. > > A good example of this is the IPV6 project's Jira item here: > > https://jira.opnfv.org/browse/IPVSIX-37 > > Which has the title of "Auto-installation of both underlay IPv6 and > overlay IPv6". The issue is marked as "Fixed" in Colorado 1.0. However, > I can't tell what code was produced in OPNFV that delivers the > auto-installation of both an underlay IPv6 and an overlay IPv6. > > In short, I'm confused about what OPNFV is producing and hope to get > some insights from you. > > Best, > -jay > > > > ------------------------------ > > Message: 9 > Date: Wed, 12 Oct 2016 11:21:27 -0600 > From: Curtis > To: "openstack-operators at lists.openstack.org" > > Subject: [Openstack-operators] glance, nova backed by NFS > Message-ID: > > Content-Type: text/plain; charset=UTF-8 > > Hi All, > > I've never used NFS with OpenStack before. But I am now with a small > lab deployment with a few compute nodes. > > Is there anything special I should do with NFS and glance and nova? I > remember there was an issue way back when of images being deleted b/c > certain components weren't aware they are on NFS. I'm guessing that > has changed but just wanted to check if there is anything specific I > should be doing configuration-wise. 
> > I can't seem to find many examples of NFS usage...so feel free to
> > point me to any documentation, blog posts, etc. I may have just missed
> > it.
> >
> > Thanks,
> > Curtis.
>
> ------------------------------
>
> Message: 10
> Date: Wed, 12 Oct 2016 17:58:47 +0000
> From: "Kris G. Lindgren"
> To: Curtis , "openstack-operators at lists.openstack.org"
> Subject: Re: [Openstack-operators] glance, nova backed by NFS
> Message-ID: <94226DBF-0D6F-4585-9341-37E193C5F0E6 at godaddy.com>
> Content-Type: text/plain; charset="utf-8"
>
> We don't use shared storage at all, but I do remember what you are
> talking about. The issue is that compute nodes weren't aware they were
> on shared storage, and would nuke the backing image from shared storage
> after all vm's on *that* compute node had stopped using it - not after
> all vm's anywhere had stopped using it.
>
> https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to
> address that concern has landed, but only in trunk, maybe mitaka. No
> stable release appears to be shared-backing-image safe.
>
> You might be able to get around this by setting the compute image
> manager task to not run (sketched below). But the issue with that will
> be one missed compute node, and everyone will have a bad day.
>
> ___________________________________________________________________
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
> On 10/12/16, 11:21 AM, "Curtis" wrote:
>
> Hi All,
>
> I've never used NFS with OpenStack before. But I am now with a small
> lab deployment with a few compute nodes.
>
> Is there anything special I should do with NFS and glance and nova? I
> remember there was an issue way back when of images being deleted b/c
> certain components weren't aware they are on NFS. I'm guessing that
> has changed but just wanted to check if there is anything specific I
> should be doing configuration-wise.
>
> I can't seem to find many examples of NFS usage...so feel free to
> point me to any documentation, blog posts, etc. I may have just missed
> it.
>
> Thanks,
> Curtis.
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
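A minimal sketch of that workaround, assuming Mitaka-era option names in
nova.conf on the compute nodes (these options have moved around between
releases, so check the docs for your version):

  [DEFAULT]
  # keep the image cache manager from deleting "unused" backing files
  remove_unused_base_images = False
  # or disable the periodic image cache manager task entirely
  image_cache_manager_interval = -1

Either way carries the trade-off Kris mentions: base images then pile up
on the share until cleaned up by hand.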
> ------------------------------
>
> Message: 11
> Date: Wed, 12 Oct 2016 17:59:13 +0000
> From: Tobias Schön
> To: Curtis , "openstack-operators at lists.openstack.org"
> Subject: Re: [Openstack-operators] glance, nova backed by NFS
> Message-ID: <58f74b23f1254f5886df90183092a32b at elara.ad.fiberdata.se>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi,
>
> We have an environment with glance and cinder using NFS. It's important
> that they have the correct rights. The shares should be owned by nova
> on the compute nodes if mounted on /var/lib/nova/instances, and the
> same for nova and glance on the controller.
>
> It's important that you map the glance and nova shares in fstab.
>
> The cinder one is controlled by the NFS driver.
>
> We are running rhelosp6, OpenStack Juno.
>
> This parameter is used:
> nfs_shares_config=/etc/cinder/shares-nfs.conf in the
> /etc/cinder/cinder.conf file, and then we have specified the share in
> /etc/cinder/shares-nfs.conf.
>
> chmod 0640 /etc/cinder/shares-nfs.conf
>
> setsebool -P virt_use_nfs on
> This one is important to make it work with SELinux.
>
> How up to date this actually is I don't know tbh, but it was up to date
> per the Red Hat documentation when we deployed it around 1.5 years ago.
>
> //Tobias
>
> -----Ursprungligt meddelande-----
> Från: Curtis [mailto:serverascode at gmail.com]
> Skickat: den 12 oktober 2016 19:21
> Till: openstack-operators at lists.openstack.org
> Ämne: [Openstack-operators] glance, nova backed by NFS
>
> Hi All,
>
> I've never used NFS with OpenStack before. But I am now with a small
> lab deployment with a few compute nodes.
>
> Is there anything special I should do with NFS and glance and nova? I
> remember there was an issue way back when of images being deleted b/c
> certain components weren't aware they are on NFS. I'm guessing that has
> changed but just wanted to check if there is anything specific I should
> be doing configuration-wise.
>
> I can't seem to find many examples of NFS usage...so feel free to point
> me to any documentation, blog posts, etc. I may have just missed it.
>
> Thanks,
> Curtis.
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
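Pulling Tobias's pieces together, the cinder side of an NFS backend is
roughly this (the export path below is made up; the option names are the
ones his mail uses for the NFS volume driver):

  # /etc/cinder/cinder.conf
  [DEFAULT]
  volume_driver = cinder.volume.drivers.nfs.NfsDriver
  nfs_shares_config = /etc/cinder/shares-nfs.conf

  # /etc/cinder/shares-nfs.conf -- one export per line
  filer.example.com:/export/cinder

  # and on hosts that mount the shares, for SELinux:
  setsebool -P virt_use_nfs on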
> ------------------------------
>
> Message: 12
> Date: Wed, 12 Oct 2016 18:06:31 +0000
> From: "Kris G. Lindgren"
> To: Tobias Schön , Curtis , "openstack-operators at lists.openstack.org"
> Subject: Re: [Openstack-operators] glance, nova backed by NFS
> Message-ID: <88AE471E-81CD-4E72-935D-4390C05F5D33 at godaddy.com>
> Content-Type: text/plain; charset="utf-8"
>
> Tobias does bring up something that we have run into before.
>
> With NFSv3, user mapping is done by ID, so you need to ensure that all
> of your servers use the same UID for nova/glance. If you are using
> packages/automation that do useradds without pinning the userid, it's
> *VERY* easy to have mismatched username/uid's across multiple boxes.
>
> NFSv4, iirc, sends the username, and the nfs server does the
> translation of the name to uid, so it should not have this issue. But
> we have been bit by that more than once on nfsv3.
>
> ___________________________________________________________________
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
> On 10/12/16, 11:59 AM, "Tobias Schön" wrote:
>
> Hi,
>
> We have an environment with glance and cinder using NFS. It's important
> that they have the correct rights. The shares should be owned by nova
> on the compute nodes if mounted on /var/lib/nova/instances, and the
> same for nova and glance on the controller.
>
> It's important that you map the glance and nova shares in fstab.
>
> The cinder one is controlled by the NFS driver.
>
> We are running rhelosp6, OpenStack Juno.
>
> This parameter is used:
> nfs_shares_config=/etc/cinder/shares-nfs.conf in the
> /etc/cinder/cinder.conf file, and then we have specified the share in
> /etc/cinder/shares-nfs.conf.
>
> chmod 0640 /etc/cinder/shares-nfs.conf
>
> setsebool -P virt_use_nfs on
> This one is important to make it work with SELinux.
>
> How up to date this actually is I don't know tbh, but it was up to date
> per the Red Hat documentation when we deployed it around 1.5 years ago.
>
> //Tobias
>
> -----Ursprungligt meddelande-----
> Från: Curtis [mailto:serverascode at gmail.com]
> Skickat: den 12 oktober 2016 19:21
> Till: openstack-operators at lists.openstack.org
> Ämne: [Openstack-operators] glance, nova backed by NFS
>
> Hi All,
>
> I've never used NFS with OpenStack before. But I am now with a small
> lab deployment with a few compute nodes.
>
> Is there anything special I should do with NFS and glance and nova? I
> remember there was an issue way back when of images being deleted b/c
> certain components weren't aware they are on NFS. I'm guessing that
> has changed but just wanted to check if there is anything specific I
> should be doing configuration-wise.
>
> I can't seem to find many examples of NFS usage...so feel free to
> point me to any documentation, blog posts, etc. I may have just
> missed it.
>
> Thanks,
> Curtis.
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> ------------------------------
>
> Message: 13
> Date: Wed, 12 Oct 2016 12:18:40 -0600
> From: Curtis
> To: "Kris G. Lindgren"
> Cc: "openstack-operators at lists.openstack.org"
> Subject: Re: [Openstack-operators] glance, nova backed by NFS
> Message-ID:
> Content-Type: text/plain; charset=UTF-8
>
> On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren wrote:
> > We don't use shared storage at all, but I do remember what you are
> > talking about. The issue is that compute nodes weren't aware they
> > were on shared storage, and would nuke the backing image from shared
> > storage after all vm's on *that* compute node had stopped using it -
> > not after all vm's anywhere had stopped using it.
> >
> > https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code
> > to address that concern has landed, but only in trunk, maybe mitaka.
> > No stable release appears to be shared-backing-image safe.
> >
> > You might be able to get around this by setting the compute image
> > manager task to not run. But the issue with that will be one missed
> > compute node, and everyone will have a bad day.
>
> Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka,
> and I will look into that bugfix. I guess I need to test this lol.
>
> Thanks,
> Curtis.
>
> > ___________________________________________________________________
> > Kris Lindgren
> > Senior Linux Systems Engineer
> > GoDaddy
> >
> > On 10/12/16, 11:21 AM, "Curtis" wrote:
> >
> > Hi All,
> >
> > I've never used NFS with OpenStack before. But I am now with a small
> > lab deployment with a few compute nodes.
> >
> > Is there anything special I should do with NFS and glance and nova? I
> > remember there was an issue way back when of images being deleted b/c
> > certain components weren't aware they are on NFS. I'm guessing that
> > has changed but just wanted to check if there is anything specific I
> > should be doing configuration-wise.
> >
> > I can't seem to find many examples of NFS usage...so feel free to
> > point me to any documentation, blog posts, etc. I may have just
> > missed it.
> >
> > Thanks,
> > Curtis.
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> --
> Blog: serverascode.com
>
> ------------------------------
>
> Message: 14
> Date: Wed, 12 Oct 2016 11:34:39 -0700
> From: James Penick
> To: Curtis
> Cc: "openstack-operators at lists.openstack.org"
> Subject: Re: [Openstack-operators] glance, nova backed by NFS
> Message-ID:
> Content-Type: text/plain; charset="utf-8"
>
> Are you backing both glance and nova-compute with NFS?
If you're only > putting the glance store on NFS you don't need any special changes. It'll > Just Work. > > On Wed, Oct 12, 2016 at 11:18 AM, Curtis wrote: > > > On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren > wrote: > > We don?t use shared storage at all. But I do remember what you are > talking about. The issue is that compute nodes weren?t aware they were on > shared storage, and would nuke the backing mage from shared storage, after > all vm?s on *that* compute node had stopped using it. Not after all vm?s > had stopped using it. > > > https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to > address that concern has landed but only in trunk maybe mitaka. Any > stable releases don?t appear to be shared backing image safe. > > > You might be able to get around this by setting the compute image > manager task to not run. But the issue with that will be one missed > compute node, and everyone will have a bad day. > > Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka, > and I will look into that bugfix. I guess I need to test this lol. > > Thanks, > Curtis. > > > > ___________________________________________________________________ > Kris Lindgren > Senior Linux Systems Engineer > GoDaddy > > On 10/12/16, 11:21 AM, "Curtis" wrote: > > Hi All, > > I've never used NFS with OpenStack before. But I am now with a small > lab deployment with a few compute nodes. > > Is there anything special I should do with NFS and glance and nova? I > remember there was an issue way back when of images being deleted b/c > certain components weren't aware they are on NFS. I'm guessing that > has changed but just wanted to check if there is anything specific I > should be doing configuration-wise. > > I can't seem to find many examples of NFS usage...so feel free to > point me to any documentation, blog posts, etc. I may have just > missed > > it. > > Thanks, > Curtis. > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-operators > > > > > > > -- > Blog: serverascode.com > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 15 > Date: Wed, 12 Oct 2016 12:49:40 -0600 > From: Curtis > To: James Penick > Cc: "openstack-operators at lists.openstack.org" > > Subject: Re: [Openstack-operators] glance, nova backed by NFS > Message-ID: > > Content-Type: text/plain; charset=UTF-8 > > On Wed, Oct 12, 2016 at 12:34 PM, James Penick wrote: > > Are you backing both glance and nova-compute with NFS? If you're only > putting the glance store on NFS you don't need any special changes. It'll > Just Work. > > I've got both glance and nova backed by NFS. Haven't put up cinder > yet, but that will also be NFS backed. I just have very limited > storage on the compute hosts, basically just enough for the operating > system; this is just a small but permanent lab deployment. Good to > hear that Glance will Just Work. :) Thanks! > > Thanks, > Curtis. > > > > On Wed, Oct 12, 2016 at 11:18 AM, Curtis wrote: > > > On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren > wrote: > > We don?t use shared storage at all. But I do remember what you are > talking about. 
The issue is that compute nodes weren't aware they were on
> shared storage, and would nuke the backing image from shared storage,
> after all vm's on *that* compute node had stopped using it - not after
> all vm's anywhere had stopped using it.
>
> https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to
> address that concern has landed, but only in trunk, maybe mitaka. No
> stable release appears to be shared-backing-image safe.
>
> You might be able to get around this by setting the compute image
> manager task to not run. But the issue with that will be one missed
> compute node, and everyone will have a bad day.
>
> Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka,
> and I will look into that bugfix. I guess I need to test this lol.
>
> Thanks,
> Curtis.
>
> ___________________________________________________________________
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
> On 10/12/16, 11:21 AM, "Curtis" wrote:
>
> Hi All,
>
> I've never used NFS with OpenStack before. But I am now with a small
> lab deployment with a few compute nodes.
>
> Is there anything special I should do with NFS and glance and nova? I
> remember there was an issue way back when of images being deleted b/c
> certain components weren't aware they are on NFS. I'm guessing that
> has changed but just wanted to check if there is anything specific I
> should be doing configuration-wise.
>
> I can't seem to find many examples of NFS usage...so feel free to
> point me to any documentation, blog posts, etc. I may have just
> missed it.
>
> Thanks,
> Curtis.
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> --
> Blog: serverascode.com
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> ------------------------------
>
> Message: 16
> Date: Thu, 13 Oct 2016 08:24:16 +1300
> From: Xav Paice
> To: Lutz Birkhahn
> Cc: "openstack-operators at lists.openstack.org"
> Subject: Re: [Openstack-operators] Ubuntu package for Octavia
> Message-ID:
> Content-Type: text/plain; charset="utf-8"
>
> I highly recommend looking into Giftwrap for that, until there are UCA
> packages.
>
> The thing missing from the packages that Giftwrap produces is init
> scripts, config file examples, and the various user and directory setup
> stuff. That's easy enough to put into config management or a separate
> package if you wanted to.
>
> On 13 October 2016 at 01:25, Lutz Birkhahn wrote:
>
> > Has anyone seen Ubuntu packages for Octavia yet?
> >
> > We're running Ubuntu 16.04 with Newton, but for whatever reason I can
> > not find any Octavia package...
> >
> > So far I've only found the following in
> > https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun:
> >
> > Ubuntu Packages Setup: Install octavia with your favorite
> > distribution: "pip install octavia"
> >
> > That was not exactly what we would like to do in our production
> > cloud...
> >
> > Thanks,
> >
> > /lutz
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
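Since packages built that way ship no init scripts, you end up writing
your own service units. Purely as an illustration (the user, binary path
and config location are assumptions, not anything Giftwrap or UCA
provides), an octavia-api unit might look like:

  [Unit]
  Description=OpenStack Octavia API
  After=network.target

  [Service]
  User=octavia
  # path depends on where your build installs the console script
  ExecStart=/usr/local/bin/octavia-api --config-file /etc/octavia/octavia.conf
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target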
> ------------------------------
>
> Message: 17
> Date: Wed, 12 Oct 2016 16:02:55 -0400
> From: Warren Wang
> To: Adam Kijak
> Cc: openstack-operators
> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]
> How do you handle Nova on Ceph?
> Message-ID:
> Content-Type: text/plain; charset="utf-8"
>
> If fault domain is a concern, you can always split the cloud up into 3
> regions, each having a dedicated Ceph cluster. It isn't necessarily
> going to mean more hardware, just logical splits. This is kind of
> assuming that the network doesn't share the same fault domain though.
>
> Alternatively, you can split the hardware for the Ceph boxes into
> multiple clusters, and use multi-backend Cinder to talk to the same set
> of hypervisors to use multiple Ceph clusters. We're doing that to
> migrate from one Ceph cluster to another. You can even mount a volume
> from each cluster into a single instance.
>
> Keep in mind that you don't really want to shrink a Ceph cluster too
> much. What's "too big"? You should keep growing so that the fault
> domains aren't too small (3 physical racks min), or you guarantee that
> the entire cluster stops if you lose the network.
>
> Just my 2 cents,
> Warren
>
> On Wed, Oct 12, 2016 at 8:35 AM, Adam Kijak wrote:
>
> > _______________________________________
> > From: Abel Lopez
> > Sent: Monday, October 10, 2016 9:57 PM
> > To: Adam Kijak
> > Cc: openstack-operators
> > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]
> > How do you handle Nova on Ceph?
> >
> > Have you thought about dedicated pools for cinder/nova and a separate
> > pool for glance, and any other uses you might have?
> > You need to setup secrets on kvm, but you can have cinder creating
> > volumes from glance images quickly in different pools
> >
> > We already have separate pools for images, volumes and instances.
> > Separate pools don't really split the failure domain though.
> > Also AFAIK you can't set up multiple pools for instances in
> > nova.conf, right?
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
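The multi-backend setup Warren describes is plain cinder.conf
configuration; a minimal sketch with two RBD backends (cluster, pool and
user names invented for the example):

  [DEFAULT]
  enabled_backends = ceph1,ceph2

  [ceph1]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  volume_backend_name = ceph1
  rbd_pool = volumes
  rbd_ceph_conf = /etc/ceph/ceph1.conf
  rbd_user = cinder
  rbd_secret_uuid = <uuid-of-libvirt-secret-for-ceph1>

  [ceph2]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  volume_backend_name = ceph2
  rbd_pool = volumes
  rbd_ceph_conf = /etc/ceph/ceph2.conf
  rbd_user = cinder
  rbd_secret_uuid = <uuid-of-libvirt-secret-for-ceph2>

A volume type per backend (keyed on volume_backend_name) then steers new
volumes to one cluster or the other.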
> ------------------------------
>
> Message: 18
> Date: Wed, 12 Oct 2016 13:46:01 -0700
> From: Clint Byrum
> To: openstack-operators
> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]
> How do you handle Nova on Ceph?
> Message-ID: <1476304977-sup-4753 at fewbar.com>
> Content-Type: text/plain; charset=UTF-8
>
> Excerpts from Adam Kijak's message of 2016-10-12 12:23:41 +0000:
> > ________________________________________
> > From: Xav Paice
> > Sent: Monday, October 10, 2016 8:41 PM
> > To: openstack-operators at lists.openstack.org
> > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]
> > How do you handle Nova on Ceph?
> >
> > On Mon, 2016-10-10 at 13:29 +0000, Adam Kijak wrote:
> > > Hello,
> > >
> > > We use a Ceph cluster for Nova (Glance and Cinder as well) and over
> > > time, more and more data is stored there. We can't keep the cluster
> > > so big because of Ceph's limitations. Sooner or later it needs to
> > > be closed for adding new instances, images and volumes. Not to
> > > mention it's a big failure domain.
> >
> > I'm really keen to hear more about those limitations.
> >
> > Basically it's all related to the failure domain ("blast radius") and
> > risk management. Bigger Ceph cluster means more users.
>
> Are these risks well documented? Since Ceph is specifically designed
> _not_ to have the kind of large blast radius that one might see with,
> say, a centralized SAN, I'm curious to hear what events trigger
> cluster-wide blasts.
>
> > Growing the Ceph cluster temporarily slows it down, so many users
> > will be affected.
>
> One might say that a Ceph cluster that can't be grown without the users
> noticing is an over-subscribed Ceph cluster. My understanding is that
> one is always advised to provision a certain amount of cluster capacity
> for growing and for replicating to replaced drives.
>
> > There are bugs in Ceph which can cause data corruption. It's rare,
> > but when it happens it can affect many (maybe all) users of the Ceph
> > cluster.
>
> :(
>
> ------------------------------
>
> Message: 19
> Date: Thu, 13 Oct 2016 13:37:58 +1100
> From: Blair Bethwaite
> To: "openstack-oper."
> Subject: [Openstack-operators] Disable console for an instance
> Message-ID:
> Content-Type: text/plain; charset="utf-8"
>
> Hi all,
>
> Does anyone know whether there is a way to disable the novnc console on
> a per instance basis?
>
> Cheers,
> Blair
>
> ------------------------------
>
> Message: 20
> Date: Thu, 13 Oct 2016 06:12:59 +0000
> From: "Juvonen, Tomi (Nokia - FI/Espoo)"
> To: "OpenStack-operators at lists.openstack.org"
> Subject: [Openstack-operators] host maintenance
> Message-ID:
> Content-Type: text/plain; charset="us-ascii"
>
> Hi,
>
> We had a session at the Austin summit on host maintenance:
> https://etherpad.openstack.org/p/AUS-ops-Nova-maint
>
> The discussion has now gotten to the point where we should start
> prototyping a service hosting the maintenance. Nova could have a link
> to this new service, but no maintenance functionality should be placed
> in the Nova project itself. I was working to have this in Nova, but it
> now looks better to build the prototype first:
> https://review.openstack.org/310510/
>
> From the discussion on the review above, the new service might have a
> maintenance API endpoint that links to a host by utilizing the "hostid"
> used in Nova, plus a "tenant_id"-specific endpoint where each project
> can get what it needs. Something like:
> http://maintenancethingy/maintenance/{hostid}
> http://maintenancethingy/maintenance/{hostid}/{tenant_id}
> This ensures a tenant will not learn details about the host, but can
> still get the needed information about maintenance affecting his
> instances.
>
> In Telco/NFV, the OPNFV Doctor project sets the requirements for this
> from that direction. I am personally interested in that part, but to
> have this serve all operator requirements, it is best to bring it up
> here.
>
> This could be further discussed in Barcelona, and we should get other
> people interested in helping to start this. Any suggestion for the Ops
> session?
>
> Looking forward,
> Tomi
>
> ------------------------------
>
> Message: 21
> Date: Thu, 13 Oct 2016 15:19:51 +0800
> From: Tom Fifield
> To: OpenStack Operators
> Subject: [Openstack-operators] Ops at Barcelona - Call for Moderators
> Message-ID:
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Hello all,
>
> The Ops design summit sessions are now listed on the schedule!
> https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Ops+Summit%3A
>
> Please tick them and set up your summit app :)
>
> We are still looking for moderators for the following sessions:
>
> * OpenStack on Containers
> * Containers on OpenStack
> * Migration to OpenStack
> * Fleet Management
> * Feedback to PWG
> * Neutron pain points
> * Config Mgmt
> * HAProxy, MySQL, Rabbit Tuning
> * Swift
> * Horizon
> * OpenStack CLI
> * Baremetal Deploy
> * OsOps
> * CI/CD workflows
> * Alt Deployment tech
> * ControlPlane Design (multi region)
> * Docs
>
> ==> If you are interested in moderating a session, please
>
> * write your name in its etherpad (multiple moderators OK!)
>
> ==> I'll be honest, I have no idea what some of the sessions are
> supposed to be, so also:
>
> * write a short description for the session so the agenda can be updated
>
> For those of you who want to know what it takes, check out the
> Moderator's Guide:
> https://wiki.openstack.org/wiki/Operations/Meetups#Moderators_Guide &
> ask questions - we're here to help!
>
> Regards,
>
> Tom, on behalf of the Ops Meetups Team
> https://wiki.openstack.org/wiki/Ops_Meetups_Team
>
> ------------------------------
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> End of OpenStack-operators Digest, Vol 72, Issue 11
> ***************************************************

From stdake at cisco.com Sat Oct 15 19:09:47 2016
From: stdake at cisco.com (Steven Dake (stdake))
Date: Sat, 15 Oct 2016 19:09:47 +0000
Subject: [Openstack-operators] [kolla] Heka deprecated in Kolla
Message-ID: <03F3E631-6703-433E-B1AB-8C1DFD3B92D9@cisco.com>

Michal had asked me if we were good to proceed with this deprecation. As
there are no objections on the mailing list or in the review, we have
decided to proceed with this (hopefully) operator-invisible change.

Regards
-steve

On 10/3/16, 1:32 PM, "Michał Jastrzębski" wrote:

>Hello,
>
>Kolla deprecates Heka in the Ocata cycle, because Mozilla doesn't
>support this project any more. During the Ocata cycle we will prepare a
>migration plan and an alternative to fill this missing functionality.
>As part of the N->O cycle we will migrate Heka to the alternative we
>decide upon in the following release. Our goal is to make this
>migration totally transparent, and the logging infrastructure (which
>Heka was part of) will keep working in the same way at higher levels
>(elastic, kibana).
>
>Regards,
>Michal
>
>_______________________________________________
>OpenStack-operators mailing list
>OpenStack-operators at lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From richardwoo2003 at gmail.com Sun Oct 16 02:17:36 2016
From: richardwoo2003 at gmail.com (Richard Woo)
Date: Sat, 15 Oct 2016 22:17:36 -0400
Subject: [Openstack-operators] Fwd: [neutron] upgrade from liberty to mitaka
In-Reply-To:
References:
Message-ID:

Hello, folks,

I have a small cluster running OpenStack Liberty; today I started
upgrading it to Mitaka.

I am having a problem launching the l3 agent on the network node. I got
the following error:

Guru mediation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
Option "verbose" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future. 2016-10-15 22:13:20.800 24466 INFO neutron.common.config [-] Logging enabled! 2016-10-15 22:13:20.801 24466 INFO neutron.common.config [-] /usr/bin/neutron-l3-agent version 8.2.0 2016-10-15 22:13:20.801 24466 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-l3-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --log-file=/var/log/neutron/l3-agent.log setup_logging /usr/lib/python2.7/dist-packages/neutron/common/config.py:269 2016-10-15 22:13:20.923 24466 DEBUG oslo_messaging._drivers.amqpdriver [req-8c392909-20ed-45f4-b997-9302008d0075 - - - - -] CALL msg_id: 0b6eb35b95ec4edca605b2d3c6a76d37 exchange 'neutron' topic 'q-l3-plugin' _send /usr/lib/python2.7/dist-packages/oslo_messaging/_ drivers/amqpdriver.py:454 /usr/lib/python2.7/dist-packages/pkg_resources/__init__.py:187: RuntimeWarning: You have iterated over the result of pkg_resources.parse_version. This is a legacy behavior which is inconsistent with the new version class introduced in setuptools 8.0. In most cases, conversion to a tuple is unnecessary. For comparison of versions, sort the Version instances directly. If you have another use case requiring the tuple, please file a bug with the setuptools project describing that need. stacklevel=1, 2016-10-15 22:13:20.951 24466 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 0b6eb35b95ec4edca605b2d3c6a76d37 __call__ /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:302 2016-10-15 22:13:20.953 24466 DEBUG neutron.callbacks.manager [req-8c392909-20ed-45f4-b997-9302008d0075 - - - - -] Subscribe: router after_create subscribe /usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py:41 2016-10-15 22:13:20.954 24466 DEBUG neutron.callbacks.manager [req-8c392909-20ed-45f4-b997-9302008d0075 - - - - -] Subscribe: router before_delete subscribe /usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py:41 2016-10-15 22:13:20.955 24466 DEBUG neutron_fwaas.services. 
firewall.agents.l3reference.firewall_l3_agent [req-8c392909-20ed-45f4-b997-9302008d0075 - - - - -] Initializing firewall agent __init__ /usr/lib/python2.7/dist- packages/neutron_fwaas/services/firewall/agents/ l3reference/firewall_l3_agent.py:55 2016-10-15 22:13:20.956 24466 CRITICAL neutron [req-8c392909-20ed-45f4-b997-9302008d0075 - - - - -] AttributeError: 'module' object has no attribute 'FIREWALL_PLUGIN' 2016-10-15 22:13:20.956 24466 ERROR neutron Traceback (most recent call last): 2016-10-15 22:13:20.956 24466 ERROR neutron File "/usr/bin/neutron-l3-agent", line 10, in 2016-10-15 22:13:20.956 24466 ERROR neutron sys.exit(main()) 2016-10-15 22:13:20.956 24466 ERROR neutron File "/usr/lib/python2.7/dist- packages/neutron/cmd/eventlet/agents/l3.py", line 17, in main 2016-10-15 22:13:20.956 24466 ERROR neutron l3_agent.main() 2016-10-15 22:13:20.956 24466 ERROR neutron File "/usr/lib/python2.7/dist- packages/neutron/agent/l3_agent.py", line 57, in main 2016-10-15 22:13:20.956 24466 ERROR neutron manager=manager) 2016-10-15 22:13:20.956 24466 ERROR neutron File "/usr/lib/python2.7/dist-packages/neutron/service.py", line 331, in create 2016-10-15 22:13:20.956 24466 ERROR neutron periodic_fuzzy_delay=periodic_fuzzy_delay) 2016-10-15 22:13:20.956 24466 ERROR neutron File "/usr/lib/python2.7/dist-packages/neutron/service.py", line 264, in __init__ 2016-10-15 22:13:20.956 24466 ERROR neutron self.manager = manager_class(host=host, *args, **kwargs) 2016-10-15 22:13:20.956 24466 ERROR neutron File "/usr/lib/python2.7/dist- packages/neutron/agent/l3/agent.py", line 635, in __init__ 2016-10-15 22:13:20.956 24466 ERROR neutron super(L3NATAgentWithStateReport, self).__init__(host=host, conf=conf) 2016-10-15 22:13:20.956 24466 ERROR neutron File "/usr/lib/python2.7/dist- packages/neutron/agent/l3/agent.py", line 243, in __init__ 2016-10-15 22:13:20.956 24466 ERROR neutron super(L3NATAgent, self).__init__(conf=self.conf) 2016-10-15 22:13:20.956 24466 ERROR neutron File "/usr/lib/python2.7/dist- packages/neutron_fwaas/services/firewall/agents/ l3reference/firewall_l3_agent.py", line 77, in __init__ 2016-10-15 22:13:20.956 24466 ERROR neutron self.fwplugin_rpc = FWaaSL3PluginApi(topics.FIREWALL_PLUGIN, 2016-10-15 22:13:20.956 24466 ERROR neutron AttributeError: 'module' object has no attribute 'FIREWALL_PLUGIN' 2016-10-15 22:13:20.956 24466 ERROR neutron On our setup, we did not use any FWaaS service, please give us hints what may cause this problem. Thanks, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Sun Oct 16 07:36:21 2016 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Sun, 16 Oct 2016 09:36:21 +0200 Subject: [Openstack-operators] [openstack-dev] [neutron] upgrade from liberty to mitaka In-Reply-To: References: Message-ID: <5884944F-FC18-4483-A624-5DE2D0D3F25F@redhat.com> Richard Woo wrote: > Hello, folks, > > I have a small cluster running openstack liberty, today I am starting to > upgrading to Mitaka. > > I am having a problem to launch l3 agent on network node. > > I got the following error: > Guru mediation now registers SIGUSR1 and SIGUSR2 by default for backward > compatibility. SIGUSR1 will no longer be registered in a future release, > so please use SIGUSR2 to generate reports. > Option "verbose" from group "DEFAULT" is deprecated for removal. Its > value may be silently ignored in the future. > 2016-10-15 22:13:20.800 24466 INFO neutron.common.config [-] Logging > enabled! 
> 2016-10-15 22:13:20.801 24466 INFO neutron.common.config [-] > /usr/bin/neutron-l3-agent version 8.2.0 > 2016-10-15 22:13:20.801 24466 DEBUG neutron.common.config [-] command > line: /usr/bin/neutron-l3-agent --config-file=/etc/neutron/neutron.conf > --config-file=/etc/neutron/l3_agent.ini > --log-file=/var/log/neutron/l3-agent.log setup_logging > /usr/lib/python2.7/dist-packages/neutron/common/config.py:269 > 2016-10-15 22:13:20.923 24466 DEBUG oslo_messaging._drivers.amqpdriver > [req-8c392909-20ed-45f4-b997-9302008d0075 - - - - -] CALL msg_id: > 0b6eb35b95ec4edca605b2d3c6a76d37 exchange 'neutron' topic 'q-l3-plugin' > _send > /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:454 > /usr/lib/python2.7/dist-packages/pkg_resources/__init__.py:187: > RuntimeWarning: You have iterated over the result of > pkg_resources.parse_version. This is a legacy behavior which is > inconsistent with the new version class introduced in setuptools 8.0. In > most cases, conversion to a tuple is unnecessary. For comparison of > versions, sort the Version instances directly. If you have another use > case requiring the tuple, please file a bug with the setuptools project > describing that need. > stacklevel=1, > 2016-10-15 22:13:20.951 24466 DEBUG oslo_messaging._drivers.amqpdriver > [-] received reply msg_id: 0b6eb35b95ec4edca605b2d3c6a76d37 __call__ > /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:302 > 2016-10-15 22:13:20.953 24466 DEBUG neutron.callbacks.manager > [req-8c392909-20ed-45f4-b997-9302008d0075 - - - - -] Subscribe: after_router_added at 0x7fa2b1a82050> router after_create subscribe > /usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py:41 > 2016-10-15 22:13:20.954 24466 DEBUG neutron.callbacks.manager > [req-8c392909-20ed-45f4-b997-9302008d0075 - - - - -] Subscribe: before_router_removed at 0x7fa2b1a86230> router before_delete subscribe > /usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py:41 > 2016-10-15 22:13:20.955 24466 DEBUG > neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent > [req-8c392909-20ed-45f4-b997-9302008d0075 - - - - -] Initializing > firewall agent __init__ > /usr/lib/python2.7/dist-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent.py:55 > 2016-10-15 22:13:20.956 24466 CRITICAL neutron > [req-8c392909-20ed-45f4-b997-9302008d0075 - - - - -] AttributeError: > 'module' object has no attribute 'FIREWALL_PLUGIN' > 2016-10-15 22:13:20.956 24466 ERROR neutron Traceback (most recent call > last): > 2016-10-15 22:13:20.956 24466 ERROR neutron File > "/usr/bin/neutron-l3-agent", line 10, in > 2016-10-15 22:13:20.956 24466 ERROR neutron sys.exit(main()) > 2016-10-15 22:13:20.956 24466 ERROR neutron File > "/usr/lib/python2.7/dist-packages/neutron/cmd/eventlet/agents/l3.py", > line 17, in main > 2016-10-15 22:13:20.956 24466 ERROR neutron l3_agent.main() > 2016-10-15 22:13:20.956 24466 ERROR neutron File > "/usr/lib/python2.7/dist-packages/neutron/agent/l3_agent.py", line 57, in > main > 2016-10-15 22:13:20.956 24466 ERROR neutron manager=manager) > 2016-10-15 22:13:20.956 24466 ERROR neutron File > "/usr/lib/python2.7/dist-packages/neutron/service.py", line 331, in create > 2016-10-15 22:13:20.956 24466 ERROR neutron > periodic_fuzzy_delay=periodic_fuzzy_delay) > 2016-10-15 22:13:20.956 24466 ERROR neutron File > "/usr/lib/python2.7/dist-packages/neutron/service.py", line 264, in > __init__ > 2016-10-15 22:13:20.956 24466 ERROR neutron self.manager = > manager_class(host=host, 
*args, **kwargs)
> 2016-10-15 22:13:20.956 24466 ERROR neutron File
> "/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 635,
> in __init__
> 2016-10-15 22:13:20.956 24466 ERROR neutron
> super(L3NATAgentWithStateReport, self).__init__(host=host, conf=conf)
> 2016-10-15 22:13:20.956 24466 ERROR neutron File
> "/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 243,
> in __init__
> 2016-10-15 22:13:20.956 24466 ERROR neutron super(L3NATAgent,
> self).__init__(conf=self.conf)
> 2016-10-15 22:13:20.956 24466 ERROR neutron File
> "/usr/lib/python2.7/dist-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent.py",
> line 77, in __init__
> 2016-10-15 22:13:20.956 24466 ERROR neutron self.fwplugin_rpc =
> FWaaSL3PluginApi(topics.FIREWALL_PLUGIN,
> 2016-10-15 22:13:20.956 24466 ERROR neutron AttributeError: 'module'
> object has no attribute 'FIREWALL_PLUGIN'
> 2016-10-15 22:13:20.956 24466 ERROR neutron
>
> On our setup, we did not use any FWaaS service, please give us hints what
> may cause this problem.

First, if you don't use FWaaS, then why have you installed the
neutron-fwaas package? Please remove it. Also, make sure that your
l3_agent.ini does not contain any fwaas remnants (I suspect you may have
[fwaas] enabled = True there). Finally, check that neutron-server does
not attempt to load the fwaas service plugin (you can check it in the
log messages from oslo.config that are dumped at service startup;
specifically, check the service_plugins value).

Ihar

From richardwoo2003 at gmail.com Sun Oct 16 16:00:29 2016
From: richardwoo2003 at gmail.com (Richard Woo)
Date: Sun, 16 Oct 2016 12:00:29 -0400
Subject: Re: [Openstack-operators] [openstack-dev] [neutron] upgrade from liberty to mitaka
In-Reply-To: <5884944F-FC18-4483-A624-5DE2D0D3F25F@redhat.com>
References: <5884944F-FC18-4483-A624-5DE2D0D3F25F@redhat.com>
Message-ID:

Ihar,

Thanks for your reply; it seems the fwaas package was installed when the
neutron server was installed:

root at controller-01:~# dpkg -l | grep neutron
ii neutron-common 2:8.2.0-0ubuntu1~cloud0 all Neutron is a virtual network service for Openstack - common
ii neutron-plugin-ml2 2:8.2.0-0ubuntu1~cloud0 all Neutron is a virtual network service for Openstack - ML2 plugin
ii neutron-server 2:8.2.0-0ubuntu1~cloud0 all Neutron is a virtual network service for Openstack - server
ii python-neutron 2:8.2.0-0ubuntu1~cloud0 all Neutron is a virtual network service for Openstack - Python library
ii python-neutron-fwaas 1:7.1.1-0ubuntu1~cloud0 all Firewall-as-a-Service driver for OpenStack Neutron
ii python-neutron-lib 0.0.2-2~cloud0 all Neutron shared routines and utilities - Python 2.7
ii python-neutronclient 1:3.1.0-0ubuntu1~cloud0 all client API library for Neutron
root at controller-01:~# apt-get remove python-neutron-fwaas
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  ipset libipset3 libmnl0 python-neutron python-neutron-lib python-openvswitch
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED:
  neutron-common neutron-plugin-ml2 neutron-server python-neutron-fwaas
0 upgraded, 0 newly installed, 4 to remove and 30 not upgraded.
After this operation, 1,149 kB disk space will be freed.
Do you want to continue? [Y/n] n
Abort.
From richardwoo2003 at gmail.com  Sun Oct 16 16:00:29 2016
From: richardwoo2003 at gmail.com (Richard Woo)
Date: Sun, 16 Oct 2016 12:00:29 -0400
Subject: [Openstack-operators] [openstack-dev] [neutron] upgrade from liberty to mitaka
In-Reply-To: <5884944F-FC18-4483-A624-5DE2D0D3F25F@redhat.com>
References: <5884944F-FC18-4483-A624-5DE2D0D3F25F@redhat.com>
Message-ID:

Ihar,

Thanks for your reply. It looks like fwaas was pulled in when the neutron server was installed:

root at controller-01:~# dpkg -l | grep neutron
ii  neutron-common        2:8.2.0-0ubuntu1~cloud0  all  Neutron is a virtual network service for Openstack - common
ii  neutron-plugin-ml2    2:8.2.0-0ubuntu1~cloud0  all  Neutron is a virtual network service for Openstack - ML2 plugin
ii  neutron-server        2:8.2.0-0ubuntu1~cloud0  all  Neutron is a virtual network service for Openstack - server
ii  python-neutron        2:8.2.0-0ubuntu1~cloud0  all  Neutron is a virtual network service for Openstack - Python library
ii  python-neutron-fwaas  1:7.1.1-0ubuntu1~cloud0  all  Firewall-as-a-Service driver for OpenStack Neutron
ii  python-neutron-lib    0.0.2-2~cloud0           all  Neutron shared routines and utilities - Python 2.7
ii  python-neutronclient  1:3.1.0-0ubuntu1~cloud0  all  client API library for Neutron
root at controller-01:~# apt-get remove python-neutron-fwaas
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  ipset libipset3 libmnl0 python-neutron python-neutron-lib python-openvswitch
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED:
  neutron-common neutron-plugin-ml2 neutron-server python-neutron-fwaas
0 upgraded, 0 newly installed, 4 to remove and 30 not upgraded.
After this operation, 1,149 kB disk space will be freed.
Do you want to continue? [Y/n] n
Abort.
root at controller-01:~#

I checked the service_plugins value in 'neutron.conf'; 'router' is enabled but not fwaas:

# The core plugin Neutron will use (string value)
core_plugin = ml2

# The service plugins Neutron will use (list value)
service_plugins = router

Richard

On Sun, Oct 16, 2016 at 3:36 AM, Ihar Hrachyshka wrote:
> Richard Woo wrote:
>> Hello, folks,
>>
>> I have a small cluster running openstack liberty, today I am starting to
>> upgrading to Mitaka.
>>
>> I am having a problem to launch l3 agent on network node.
>>
>> I got the following error:
>> [...]
>>
>> On our setup, we did not use any FWaaS service, please give us hints what
>> may cause this problem.
>
> First, if you don't use FWaaS, then why have you installed the
> neutron-fwaas package? Please remove it. Also, make sure that your
> l3_agent.ini does not contain any fwaas remnants (I suspect you may have
> [fwaas] enabled = True there). Finally, check that neutron-server does
> not attempt to load the fwaas service plugin (you can check it in the
> log messages from oslo.config that are dumped at service startup;
> specifically, check the service_plugins value).
>
> Ihar
From richardwoo2003 at gmail.com  Sun Oct 16 16:20:22 2016
From: richardwoo2003 at gmail.com (Richard Woo)
Date: Sun, 16 Oct 2016 12:20:22 -0400
Subject: [Openstack-operators] [openstack-dev] [neutron] upgrade from liberty to mitaka
In-Reply-To:
References: <5884944F-FC18-4483-A624-5DE2D0D3F25F@redhat.com>
Message-ID:

Ihar,

After looking at the version of the python-neutron-fwaas package, I found it was still the old one. I upgraded the package to the Mitaka version and the problem went away. Thanks for your help.

Richard

On Sun, Oct 16, 2016 at 12:00 PM, Richard Woo wrote:
> Ihar,
>
> Thanks for your reply. It looks like fwaas was pulled in when the
> neutron server was installed:
>
> [...]

From ihrachys at redhat.com  Sun Oct 16 18:01:33 2016
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Sun, 16 Oct 2016 20:01:33 +0200
Subject: [Openstack-operators] [openstack-dev] [neutron] upgrade from liberty to mitaka
In-Reply-To:
References: <5884944F-FC18-4483-A624-5DE2D0D3F25F@redhat.com>
Message-ID: <75630E3E-98FF-4170-A238-96C0066FC961@redhat.com>

Richard Woo wrote:
> Ihar,
>
> Thanks for your reply. It looks like fwaas was pulled in when the
> neutron server was installed:
>
> root at controller-01:~# apt-get remove python-neutron-fwaas
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> The following packages were automatically installed and are no longer
> required:
>   ipset libipset3 libmnl0 python-neutron python-neutron-lib python-openvswitch
> Use 'apt-get autoremove' to remove them.
> The following packages will be REMOVED:
>   neutron-common neutron-plugin-ml2 neutron-server python-neutron-fwaas
> 0 upgraded, 0 newly installed, 4 to remove and 30 not upgraded.
> After this operation, 1,149 kB disk space will be freed.
> Do you want to continue? [Y/n] n
> Abort.
> root at controller-01:~#

I know you solved your issue already, but I just want to comment on the packaging behaviour you see. I don't believe that your distribution set the dependencies up right: you should be able to install your system without any fwaas code at all; it's meant to be a plugin, not a dependency of neutron-server. It may be the case that, due to the defaults chosen by the Debian packages, the default configuration files for the neutron-server service do not work unless you also pull in neutron-fwaas. Maybe it's something historical. But ideally, you should be able to run a bare neutron-server with just the code from the openstack/neutron repo.

Ihar
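To see what actually pulled the fwaas package onto a box, the reverse-dependency view is a quick check. A minimal sketch, assuming an apt-based system like the one in the listing above:

  # what installed packages depend on the fwaas code?
  apt-cache rdepends --installed python-neutron-fwaas

  # does neutron-server itself declare a dependency on it?
  apt-cache depends neutron-server | grep -i fwaas

If neutron-server (or a metapackage) shows up as a reverse dependency, that is the packaging wart described above: removal will then want to take the server along with it, exactly as in the quoted apt-get output.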
From tom at openstack.org  Mon Oct 17 03:58:52 2016
From: tom at openstack.org (Tom Fifield)
Date: Mon, 17 Oct 2016 11:58:52 +0800
Subject: [Openstack-operators] Ops Meetups Team - Next Meeting Coordinates and News
Message-ID: <1b44d68d-2f44-ddd1-ae0e-cab39a3430f8@openstack.org>

Hi Ops Meetups Team,

Woo! Less than a week to go to Barcelona. Let's have a quick meeting to finalise everything.

==Next Meeting==

So, the next meeting is at:

==> Tuesday, 18 Oct at 1400 UTC [1] in #openstack-operators

[2] will be kept up to date with information about the meeting time and agenda

===Barcelona Meeting===

Our in-person meeting in Barcelona is on the schedule:

https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/17413/ops-ops-meetups-team

Tuesday, October 25, 5:55pm-6:35pm
AC Hotel - P3 - Montjuic

All welcome!

Regards,

Tom

[1] http://www.timeanddate.com/worldclock/fixedtime.html?msg=Ops+Meetups+Team&iso=20161018T22&p1=241
[2] https://wiki.openstack.org/wiki/Ops_Meetups_Team#Meeting_Information

From tom at openstack.org  Mon Oct 17 04:58:27 2016
From: tom at openstack.org (Tom Fifield)
Date: Mon, 17 Oct 2016 12:58:27 +0800
Subject: [Openstack-operators] Ops@Barcelona - Updates
Message-ID: <5f090efa-d754-939d-4b99-ec16353318ec@openstack.org>

Hi all,

There have been some schedule updates for the Ops design summit sessions:

https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Ops+Summit%3A

New Sessions added:
* Ops Meetups Team
* Some working groups not previously listed
* Horizon: Operator and Plugin Author Feedback
* Neutron: End user and operator feedback
* Barbican: User and Operator Feedback Session

and some minor room and time changes too - please double-check your schedule!

** Call for Moderators **

We really need a moderator for:
>> * HAProxy, MySQL, Rabbit Tuning

since it looks like it will be one of the most popular sessions, but we don't have a moderator yet.

These sessions will be canceled unless we can find a moderator:
>> * Fleet Management
>> * Swift
>> * Horizon
>> * Alt Deployment tech
>> * ControlPlane Design (multi region)

For those of you who want to know what it takes, check out the Moderator's Guide: https://wiki.openstack.org/wiki/Operations/Meetups#Moderators_Guide & ask questions - we're here to help!

Regards,

Tom

From me at not.mn  Mon Oct 17 06:17:54 2016
From: me at not.mn (John Dickinson)
Date: Sun, 16 Oct 2016 23:17:54 -0700
Subject: [Openstack-operators] Ops@Barcelona - Updates
In-Reply-To: <5f090efa-d754-939d-4b99-ec16353318ec@openstack.org>
References: <5f090efa-d754-939d-4b99-ec16353318ec@openstack.org>
Message-ID: <348F1400-48B2-4344-A0E6-5ECBAD9453BC@not.mn>

I'm happy to moderate the Swift session, but I cannot make it at 3:05. If it's switched with a session in the earlier slot (2:15), then I'll be happy to lead that one. If not, I may be able to find another moderator, but I won't be able to be there.

--John

On 16 Oct 2016, at 21:58, Tom Fifield wrote:
>> [...]

From tom at openstack.org  Mon Oct 17 06:22:55 2016
From: tom at openstack.org (Tom Fifield)
Date: Mon, 17 Oct 2016 14:22:55 +0800
Subject: [Openstack-operators] Ops@Barcelona - Updates
In-Reply-To: <348F1400-48B2-4344-A0E6-5ECBAD9453BC@not.mn>
References: <5f090efa-d754-939d-4b99-ec16353318ec@openstack.org> <348F1400-48B2-4344-A0E6-5ECBAD9453BC@not.mn>
Message-ID:

On 17/10/16 14:17, John Dickinson wrote:
> I'm happy to moderate the Swift session, but I cannot make it at 3:05.
> If it's switched with a session in the earlier slot (2:15), then I'll be
> happy to lead that one. If not, I may be able to find another moderator,
> but I won't be able to be there.

Done & Done! Thanks John :)

https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/17353/ops-swift-feedback

> --John
>
>> [...]

From mrhillsman at gmail.com  Mon Oct 17 13:53:32 2016
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Mon, 17 Oct 2016 08:53:32 -0500
Subject: [Openstack-operators] Ops@Barcelona - Updates
In-Reply-To:
References: <5f090efa-d754-939d-4b99-ec16353318ec@openstack.org> <348F1400-48B2-4344-A0E6-5ECBAD9453BC@not.mn>
Message-ID: <0874d474-7902-5aff-af8d-d18fc03153c9@gmail.com>

I was going to grab a few more sessions but have a talk on Tuesday @3:55. If someone wants to take Horizon, which is during the same time as the talk, I can cover an earlier session.

On 10/17/2016 01:22 AM, Tom Fifield wrote:
> [...]

--
Kind regards,

--
Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center
mrhillsman at gmail.com
mobile: (210) 413-1659
office: (210) 312-1267
Learner | Ideation | Belief | Responsibility | Command
http://osic.org

From william.josefson at gmail.com  Mon Oct 17 14:05:26 2016
From: william.josefson at gmail.com (William Josefsson)
Date: Mon, 17 Oct 2016 22:05:26 +0800
Subject: [Openstack-operators] Upgrades from Liberty
Message-ID:

Hi, can anyone please advise on upgrading from Liberty: can I upgrade to Newton in one go, or will I first have to upgrade to Mitaka? I'm not sure whether anything would break jumping two releases in one go. Also, if there's any good documentation explaining how to go about an upgrade from Liberty->Mitaka, or Liberty->Newton, I would appreciate it if you can share. thx will

From serverascode at gmail.com  Mon Oct 17 14:09:34 2016
From: serverascode at gmail.com (Curtis)
Date: Mon, 17 Oct 2016 08:09:34 -0600
Subject: [Openstack-operators] Ops@Barcelona - Updates
In-Reply-To: <5f090efa-d754-939d-4b99-ec16353318ec@openstack.org>
References: <5f090efa-d754-939d-4b99-ec16353318ec@openstack.org>
Message-ID:

On Sun, Oct 16, 2016 at 10:58 PM, Tom Fifield wrote:
> [...]
>
> These sessions will be canceled unless we can find a moderator:
>>> * Fleet Management
>>> * Swift
>>> * Horizon
>>> * Alt Deployment tech
>>> * ControlPlane Design (multi region)

I put myself down for "Alt deployment tech" and "controlplane design" but would be happy to work with anyone else on those. Actually they are both pretty interesting. :)

Thanks,
Curtis.

--
Blog: serverascode.com

From tom at openstack.org  Mon Oct 17 15:48:19 2016
From: tom at openstack.org (Tom Fifield)
Date: Mon, 17 Oct 2016 23:48:19 +0800
Subject: [Openstack-operators] Fwd: [openstack-dev] [all] indoor climbing break at summit?
Message-ID: <8dbe1ea8-fb86-ac22-359c-188b19f9a7b5@openstack.org>

Ops - if you're interested, jump in on the dev list.
Chris On Mon, Oct 17, 2016 at 12:58 AM, Tom Fifield wrote: > Hi all, > > There have been some schedule updates for the Ops design summit sessions: > > https://www.openstack.org/summit/barcelona-2016/summit-sched > ule/global-search?t=Ops+Summit%3A > > New Sessions added: > * Ops Meetups Team > * Some working groups not previously listed > * Horizon: Operator and Plugin Author Feedback > * Neutron: End user and operator feedback > * Barbican: User and Operator Feedback Session > > and some minor room and time changes too - please doublecheck your > schedule! > > > ** Call for Moderators ** > > We really need a moderator for: > >> * HAProy, MySQL, Rabbit Tuning > > since it looks like it will be one of the most popular sessions, but we > don't have a moderator yet. > > > These sessions will be canceled unless we can find a moderator: > >> * Fleet Management > >> * Swift > >> * Horizon > >> * Alt Deployment tech > >> * ControlPlane Design(multi region) > > > For those of you who want to know what it takes check out the Moderator's > Guide: https://wiki.openstack.org/wiki/Operations/Meetups#Moderators_Guide > & ask questions - we're here to help! > > > > Regards, > > > > Tom > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From trandles at lanl.gov Mon Oct 17 17:59:11 2016 From: trandles at lanl.gov (Tim Randles) Date: Mon, 17 Oct 2016 11:59:11 -0600 Subject: [Openstack-operators] [scientific][scientific-wg] This week's Scientific WG meeting In-Reply-To: <8417732E-5EFC-4B4E-BB64-CE3E2435559A@telfer.org> References: <8417732E-5EFC-4B4E-BB64-CE3E2435559A@telfer.org> Message-ID: <9331af27-9381-b67e-b98c-3758821af413@lanl.gov> I have an additional item. I would like to gauge interest in an informal gathering for drinks for those attending Supercomputing 2016 in Salt Lake City. The proposed date is Monday, 14 November beginning at 7:00 PM. Please reply, either to the list or to me directly at trandles at lanl.gov, if you're interested. Thanks, Tim On 10/17/2016 11:11 AM, Stig Telfer wrote: > Hi All - > > Apologies, there will not be a WG meeting this week: Blair?s en route to Europe and I?ll be travelling home from the OpenStack Identity Federation workshop. > > Nevertheless I had two items to raise: > > - People interested in contributing lightning talks for the Scientific OpenStack BoF, please let me know. > - Anyone who registered for the evening social but has not yet bought their ticket, please do so before the remaining tickets go public! 
> > Best wishes, > Stig > From clint at fewbar.com Mon Oct 17 18:03:04 2016 From: clint at fewbar.com (Clint Byrum) Date: Mon, 17 Oct 2016 11:03:04 -0700 Subject: [Openstack-operators] Ops@Barcelona - Updates In-Reply-To: <5f090efa-d754-939d-4b99-ec16353318ec@openstack.org> References: <5f090efa-d754-939d-4b99-ec16353318ec@openstack.org> Message-ID: <1476727258-sup-2544@fewbar.com> Excerpts from Tom Fifield's message of 2016-10-17 12:58:27 +0800: > Hi all, > > There have been some schedule updates for the Ops design summit sessions: > > https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Ops+Summit%3A > > > New Sessions added: > * Ops Meetups Team > * Some working groups not previously listed > * Horizon: Operator and Plugin Author Feedback > * Neutron: End user and operator feedback > * Barbican: User and Operator Feedback Session > > and some minor room and time changes too - please doublecheck your schedule! > > > ** Call for Moderators ** > > We really need a moderator for: > >> * HAProy, MySQL, Rabbit Tuning > > since it looks like it will be one of the most popular sessions, but we > don't have a moderator yet. > I've added my name to this one, but a) I'm not a RabbitMQ expert (the other two are my focus) b) I may have trouble getting to the session on time as I might be in a different location until 11:00. It would be great if somebody else would co-moderate. I'm sure there are plenty of willing and able folks who can help. From jon at csail.mit.edu Mon Oct 17 18:49:13 2016 From: jon at csail.mit.edu (Jonathan Proulx) Date: Mon, 17 Oct 2016 14:49:13 -0400 Subject: [Openstack-operators] How do you even test for that? Message-ID: <20161017184913.GF7922@csail.mit.edu> Hi All, Just on the other side of a Kilo->Mitaka upgrade (with a very brief transit through Liberty in the middle). As usual I've caught a few problems in production that I have no idea how I could possibly have tested for because they relate to older running instances and some remnants of older package versions on the production side which wouldn't have existed in test unless I'd installed the test server with Havana and done incremental upgrades starting a fairly wide suite of test instances along the way. First thing that bit me was neutron-db-manage being confused because my production system still had migrations from Havana hanging around. I'm calling this a packaging bug https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1633576 but I also feel like remembering release names forever might be a good thing. Later I discovered during the Juno release (maybe earlier ones too) making snapshot of running instances populated the snapshot's meta data with "instance_type_vcpu_weight: none". Currently (Mitaka) this value must be an integer if it is set or boot fails. This has the interesting side effect of putting your instance into shutdown/error state if you try a hard reboot of a formerly working instance. I 'fixed' this manually frobbing the DB to set lines where instance_type_vcpu_weight was set to none to be deleted. Does anyone have strategies on how to actually test for problems with "old" artifacts like these? Yes having things running from 18-24month old snapshots is "bad" and yes it would be cleaner to install a fresh control plane at each upgrade and cut over rather than doing an actual in place upgrade. But neither of these sub-optimal patterns are going all the way away anytime soon. 
-Jon -- From mrhillsman at gmail.com Mon Oct 17 19:12:14 2016 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 17 Oct 2016 14:12:14 -0500 Subject: [Openstack-operators] Ops@Barcelona - Updates In-Reply-To: References: <5f090efa-d754-939d-4b99-ec16353318ec@openstack.org> Message-ID: <61bc8e5b-f507-02f3-d9bf-5f59c38ca08e@gmail.com> Hey everyone, I as well will be in Barcelona moderating a couple of sessions and would appreciate any comments/concerns/questions/discussions be added to relevant etherpads and/or emailed to me directly. On 10/17/2016 12:41 PM, Chris Morgan wrote: > Hello Operators, It's just about one week until Barcelona, so here's a > reminder: > > The operators sessions at Barcelona all have a dedicated etherpads for > you to submit talking points. > > The master list of those for Barcelona is > > + > > I am moderating a couple and would like to encourage more content be > added to those ether pads, namely > > Hardware (https://etherpad.openstack.org/p/BCN-ops-hardware) and > particularly > > Ceph https://etherpad.openstack.org/p/BCN-ops-ceph which is in need of > more content. > > I'd be willing to update these ether pads if anyone cares to email me > ideas on what we should discuss in response to this email. > > For those of you going, see you in Spain! > > Chris > > On Mon, Oct 17, 2016 at 12:58 AM, Tom Fifield > wrote: > > Hi all, > > There have been some schedule updates for the Ops design summit > sessions: > > https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Ops+Summit%3A > > > > New Sessions added: > * Ops Meetups Team > * Some working groups not previously listed > * Horizon: Operator and Plugin Author Feedback > * Neutron: End user and operator feedback > * Barbican: User and Operator Feedback Session > > and some minor room and time changes too - please doublecheck your > schedule! > > > ** Call for Moderators ** > > We really need a moderator for: > >> * HAProy, MySQL, Rabbit Tuning > > since it looks like it will be one of the most popular sessions, > but we don't have a moderator yet. > > > These sessions will be canceled unless we can find a moderator: > >> * Fleet Management > >> * Swift > >> * Horizon > >> * Alt Deployment tech > >> * ControlPlane Design(multi region) > > > For those of you who want to know what it takes check out the > Moderator's Guide: > https://wiki.openstack.org/wiki/Operations/Meetups#Moderators_Guide > > & ask questions - we're here to help! > > > > Regards, > > > > Tom > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > > -- > Chris Morgan > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Kind regards, -- Melvin Hillsman Ops Technical Lead OpenStack Innovation Center mrhillsman at gmail.com mobile: (210) 413-1659 office: (210) 312-1267 Learner | Ideation | Belief | Responsibility | Command http://osic.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From trandles at lanl.gov Mon Oct 17 22:19:50 2016 From: trandles at lanl.gov (Tim Randles) Date: Mon, 17 Oct 2016 16:19:50 -0600 Subject: [Openstack-operators] [scientific][scientific-wg] This week's Scientific WG meeting In-Reply-To: <9331af27-9381-b67e-b98c-3758821af413@lanl.gov> References: <8417732E-5EFC-4B4E-BB64-CE3E2435559A@telfer.org> <9331af27-9381-b67e-b98c-3758821af413@lanl.gov> Message-ID: <54fe23f4-c713-c28e-67bc-7dfea1a24fcf@lanl.gov> Someone pointed out that I misread the SC schedule and the opening reception is Monday at 19:00. Disregard... Tim On 10/17/2016 11:59 AM, Tim Randles wrote: > I have an additional item. I would like to gauge interest in an > informal gathering for drinks for those attending Supercomputing 2016 in > Salt Lake City. The proposed date is Monday, 14 November beginning at > 7:00 PM. Please reply, either to the list or to me directly at > trandles at lanl.gov, if you're interested. > > Thanks, > Tim > > On 10/17/2016 11:11 AM, Stig Telfer wrote: >> Hi All - >> >> Apologies, there will not be a WG meeting this week: Blair?s en route >> to Europe and I?ll be travelling home from the OpenStack Identity >> Federation workshop. >> >> Nevertheless I had two items to raise: >> >> - People interested in contributing lightning talks for the Scientific >> OpenStack BoF, please let me know. >> - Anyone who registered for the evening social but has not yet bought >> their ticket, please do so before the remaining tickets go public! >> >> Best wishes, >> Stig >> > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From matt at mattfischer.com Mon Oct 17 23:45:07 2016 From: matt at mattfischer.com (Matt Fischer) Date: Mon, 17 Oct 2016 17:45:07 -0600 Subject: [Openstack-operators] How do you even test for that? In-Reply-To: <20161017184913.GF7922@csail.mit.edu> References: <20161017184913.GF7922@csail.mit.edu> Message-ID: This does not cover all your issues but after seeing mysql bugs between I and J and also J to K we now export and restore production control plane data into a dev environment to test the upgrades. If we have issues we destroy this environment and run it again. For longer running instances that's tough but we try to catch those in our shared dev environment or staging with regression tests. This is also where we catch issues with outside hardware interactions like load balancers and storage. For your other issue was there a warning or depreciation in the logs for that? That's always at the top of our checklist. On Oct 17, 2016 12:51 PM, "Jonathan Proulx" wrote: > Hi All, > > Just on the other side of a Kilo->Mitaka upgrade (with a very brief > transit through Liberty in the middle). > > As usual I've caught a few problems in production that I have no idea > how I could possibly have tested for because they relate to older > running instances and some remnants of older package versions on the > production side which wouldn't have existed in test unless I'd > installed the test server with Havana and done incremental upgrades > starting a fairly wide suite of test instances along the way. > > First thing that bit me was neutron-db-manage being confused because > my production system still had migrations from Havana hanging around. 
> I'm calling this a packaging bug > https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1633576 but I > also feel like remembering release names forever might be a good > thing. > > Later I discovered during the Juno release (maybe earlier ones too) > making snapshot of running instances populated the snapshot's meta > data with "instance_type_vcpu_weight: none". Currently (Mitaka) this > value must be an integer if it is set or boot fails. This has the > interesting side effect of putting your instance into shutdown/error > state if you try a hard reboot of a formerly working instance. I > 'fixed' this manually frobbing the DB to set lines where > instance_type_vcpu_weight was set to none to be deleted. > > Does anyone have strategies on how to actually test for problems with > "old" artifacts like these? > > Yes having things running from 18-24month old snapshots is "bad" and > yes it would be cleaner to install a fresh control plane at each > upgrade and cut over rather than doing an actual in place upgrade. But > neither of these sub-optimal patterns are going all the way away > anytime soon. > > -Jon > > -- > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christophe.sauthier at objectif-libre.com Tue Oct 18 08:37:17 2016 From: christophe.sauthier at objectif-libre.com (Christophe Sauthier) Date: Tue, 18 Oct 2016 10:37:17 +0200 Subject: [Openstack-operators] [cloudkitty] Looking for use-cases to extend Cloudkitty, let's meet in Barcelona ! Message-ID: <35b46129dd6d7f12417b2ae42260ae21@objectif-libre.com> Dear ops ! In the Cloudkitty team (as a reminder it is used for chargeback and rating and it is in the Big Tent) we are heavily looking for use-cases that you're experiencing. We are doing our best with the people we have around us, but I am sure we are lacking many use-cases. I am sure many of you will go to the OpenStack Summit in Barcelona, and the good thing is so are we ! I would be really really interested to meet you in person, especially if you have a chargeback or rating project/need or if you are already doing so... Please drop me an email so that we can arrange a quick chat there or catch me at the D26 booth (Objectif Lilbre) where I should be hanging when I am not attending a talk... Christohe Sauthier, PTL of Cloudkitty ---- Christophe Sauthier Mail : christophe.sauthier at objectif-libre.com CEO Mob : +33 (0) 6 16 98 63 96 Objectif Libre URL : www.objectif-libre.com Au service de votre Cloud Twitter : @objectiflibre Suivez les actualit?s OpenStack en fran?ais en vous abonnant ? la Pause OpenStack http://olib.re/pause-openstack From jon at csail.mit.edu Tue Oct 18 14:00:32 2016 From: jon at csail.mit.edu (Jonathan D. Proulx) Date: Tue, 18 Oct 2016 10:00:32 -0400 Subject: [Openstack-operators] How do you even test for that? In-Reply-To: References: <20161017184913.GF7922@csail.mit.edu> Message-ID: <20161018140032.GA9337@csail.mit.edu> On Mon, Oct 17, 2016 at 05:45:07PM -0600, Matt Fischer wrote: :This does not cover all your issues but after seeing mysql bugs between I :and J and also J to K we now export and restore production control plane :data into a dev environment to test the upgrades. If we have issues we :destroy this environment and run it again. 
Yeah I learned that one the hard way a while back (maybe havana?), you ever revert a production OpenStack upgrade ;) A copy of the production DB goes into test pretty much immediately prior to upgrade tests. :For longer running instances that's tough but we try to catch those in our :shared dev environment or staging with regression tests. This is also where :we catch issues with outside hardware interactions like load balancers and :storage. : :For your other issue was there a warning or depreciation in the logs for :that? That's always at the top of our checklist. Not that I saw or could find post facto on the controllers or hypervisors nova and glance logs. But it's not so much the specific issue which is dealt with as the class of issue. Compatibility of artifacts created under Latest-N where N>1 It entirely possible there isn't a good way to test. I mean I can't think of one but I know some of you out there are smarter than me so hope springs eternal. Perhaps designing better post upgrade validation that focuses on oldest artifacts, or various generations of them is the best I can hope for. At least then Ops would catch them and start working on a fix ASAP. -Jon :On Oct 17, 2016 12:51 PM, "Jonathan Proulx" wrote: : :> Hi All, :> :> Just on the other side of a Kilo->Mitaka upgrade (with a very brief :> transit through Liberty in the middle). :> :> As usual I've caught a few problems in production that I have no idea :> how I could possibly have tested for because they relate to older :> running instances and some remnants of older package versions on the :> production side which wouldn't have existed in test unless I'd :> installed the test server with Havana and done incremental upgrades :> starting a fairly wide suite of test instances along the way. :> :> First thing that bit me was neutron-db-manage being confused because :> my production system still had migrations from Havana hanging around. :> I'm calling this a packaging bug :> https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1633576 but I :> also feel like remembering release names forever might be a good :> thing. :> :> Later I discovered during the Juno release (maybe earlier ones too) :> making snapshot of running instances populated the snapshot's meta :> data with "instance_type_vcpu_weight: none". Currently (Mitaka) this :> value must be an integer if it is set or boot fails. This has the :> interesting side effect of putting your instance into shutdown/error :> state if you try a hard reboot of a formerly working instance. I :> 'fixed' this manually frobbing the DB to set lines where :> instance_type_vcpu_weight was set to none to be deleted. :> :> Does anyone have strategies on how to actually test for problems with :> "old" artifacts like these? :> :> Yes having things running from 18-24month old snapshots is "bad" and :> yes it would be cleaner to install a fresh control plane at each :> upgrade and cut over rather than doing an actual in place upgrade. But :> neither of these sub-optimal patterns are going all the way away :> anytime soon. 
:> :> -Jon :> :> -- :> :> _______________________________________________ :> OpenStack-operators mailing list :> OpenStack-operators at lists.openstack.org :> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators :> -- From openstack at medberry.net Tue Oct 18 15:03:49 2016 From: openstack at medberry.net (David Medberry) Date: Tue, 18 Oct 2016 09:03:49 -0600 Subject: [Openstack-operators] Ops Lightning Talks Message-ID: Dear Readers, We have TWO back-to-back (double sessions) of Lightning Talks for Operators. And by "for Operators" I mean largely that Operators will be the audience. If you have an OpenStack problem, thingamajig, technique, simplification, gadget, whatzit that readily lends itself to a Lightning talk. please email me and put it into the etherpad here: https://etherpad.openstack.org/p/BCN-ops-lightning-talks There are two sessions... but I'd prefer to fill the first one and cancel the second one. But if your schedule dictates that you are only available for the second, we'll hold both. (And in spite of my natural levity, it can be a serious talk, a serious problem, or something completely frivolous but there might be tomatoes in the audience so watch it.) -dave David Medberry OpenStack Guy and your friendly moderator. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Tue Oct 18 15:48:13 2016 From: dms at redhat.com (David Moreau Simard) Date: Tue, 18 Oct 2016 11:48:13 -0400 Subject: [Openstack-operators] If you are deploying with or against RDO, we'd like to hear from you Message-ID: Hi openstack-operators, We're currently gathering feedback from RDO users and operators with the help of a very small and anonymous survey [1]. It's not very in-depth as we value your time and we all know filling surveys is boring but it'd be very valuable to us. If you'd like to chat RDO more in details at the upcoming OpenStack summit: what we're doing right, what we're doing wrong or even if you have questions about either starting to use it or how to get involved... Feel free to get in touch with me or join us at the RDO community meetup [2]. Thanks ! [1]: https://www.redhat.com/archives/rdo-list/2016-October/msg00128.html [2]: https://www.eventbrite.com/e/an-evening-of-ceph-and-rdo-tickets-28022550202 David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From pieter.kruithof.jr at intel.com Tue Oct 18 15:56:23 2016 From: pieter.kruithof.jr at intel.com (Kruithof Jr, Pieter) Date: Tue, 18 Oct 2016 15:56:23 +0000 Subject: [Openstack-operators] If you are deploying with or against RDO, we'd like to hear from you In-Reply-To: References: Message-ID: <26A33525-04F9-4422-BC2D-39C306D29222@intel.com> Hi David, Can you talk a bit about how you intend to share the results and/or data from the survey? Thanks, Piet On 10/18/16, 9:48 AM, "David Moreau Simard" wrote: Hi openstack-operators, We're currently gathering feedback from RDO users and operators with the help of a very small and anonymous survey [1]. It's not very in-depth as we value your time and we all know filling surveys is boring but it'd be very valuable to us. If you'd like to chat RDO more in details at the upcoming OpenStack summit: what we're doing right, what we're doing wrong or even if you have questions about either starting to use it or how to get involved... Feel free to get in touch with me or join us at the RDO community meetup [2]. Thanks ! 
[1]: https://www.redhat.com/archives/rdo-list/2016-October/msg00128.html [2]: https://www.eventbrite.com/e/an-evening-of-ceph-and-rdo-tickets-28022550202 David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From alawson at aqorn.com Tue Oct 18 16:48:03 2016 From: alawson at aqorn.com (Adam Lawson) Date: Tue, 18 Oct 2016 09:48:03 -0700 Subject: [Openstack-operators] [Neutron][LBaaS] Architecture/Wiring for LBaaS service extension Message-ID: Greetings fellow stackers. So I found the list of service extensions [1,] the modular architecture [2] and info re the driver API [3]. The architecture diagram [4] doesn't show up in link3 and furthermore shows it was last updated in 2014 which tells me it has probably changed since then. Where is the best/most recent info for Neutron's LBaaS service extension? [1] http://docs.openstack.org/developer/neutron/devref/service_extensions.html [2] https://wiki.openstack.org/wiki/Neutron/LBaaS/Architecture [3] https://wiki.openstack.org/wiki/Neutron/LBaaS/DriverAPI [4] https://wiki.openstack.org/wiki/File:Lbaas_arch.JPG //adam *Adam Lawson* Principal Architect, CEO Office: +1-916-794-5706 -------------- next part -------------- An HTML attachment was scrubbed... URL: From clint at fewbar.com Tue Oct 18 17:03:15 2016 From: clint at fewbar.com (Clint Byrum) Date: Tue, 18 Oct 2016 10:03:15 -0700 Subject: [Openstack-operators] How do you even test for that? In-Reply-To: <20161017184913.GF7922@csail.mit.edu> References: <20161017184913.GF7922@csail.mit.edu> Message-ID: <1476764283-sup-6972@fewbar.com> Excerpts from Jonathan Proulx's message of 2016-10-17 14:49:13 -0400: > Hi All, > > Just on the other side of a Kilo->Mitaka upgrade (with a very brief > transit through Liberty in the middle). > > As usual I've caught a few problems in production that I have no idea > how I could possibly have tested for because they relate to older > running instances and some remnants of older package versions on the > production side which wouldn't have existed in test unless I'd > installed the test server with Havana and done incremental upgrades > starting a fairly wide suite of test instances along the way. > In general, modifying _anything_ in place is hard to test. You're much better off with as much immutable content as possible on all of your nodes. If you've been wondering what this whole Docker nonsense is about, well, that's what it's about. You docker build once per software release attempt, and then mount data read/write, and configs readonly. Both openstack-ansible and kolla are deployment projects that try to do some of this via lxc or docker, IIRC. This way when you test your container image in test, you copy it out to prod, start up the new containers, stop the old ones, and you know that _at least_ you don't have older stuff running anymore. Data and config are still likely to be the source of issues, but there are other ways to help test that. > First thing that bit me was neutron-db-manage being confused because > my production system still had migrations from Havana hanging around. > I'm calling this a packaging bug > https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1633576 but I > also feel like remembering release names forever might be a good > thing. 
> Ouch, indeed one of the first things to do _before_ an upgrade is to run the migrations of the current version to make sure your schema is up to date. Also it's best to make sure you have _all_ of the stable updates before you do that, since it's possible fixes have landed in the migrations that are meant to smooth the upgrade process. > Later I discovered during the Juno release (maybe earlier ones too) > making snapshot of running instances populated the snapshot's meta > data with "instance_type_vcpu_weight: none". Currently (Mitaka) this > value must be an integer if it is set or boot fails. This has the > interesting side effect of putting your instance into shutdown/error > state if you try a hard reboot of a formerly working instance. I > 'fixed' this manually frobbing the DB to set lines where > instance_type_vcpu_weight was set to none to be deleted. > This one is tough because it is clearly data and state related. It's hard to say how you got the 'none' values in there instead of ints. Somebody else suggested making db snapshots and loading them into a test control plane. That seems like an easy-ish one to do some surface level finding, but the fact is it could also be super dangerous if not isolated well, and the more isolation, the less of a real simulation it is. > Does anyone have strategies on how to actually test for problems with > "old" artifacts like these? > > Yes having things running from 18-24month old snapshots is "bad" and > yes it would be cleaner to install a fresh control plane at each > upgrade and cut over rather than doing an actual in place upgrade. But > neither of these sub-optimal patterns are going all the way away > anytime soon. > In-place upgrades must work. If they don't, please file bugs and complain loudly. :) From dms at redhat.com Tue Oct 18 17:52:37 2016 From: dms at redhat.com (David Moreau Simard) Date: Tue, 18 Oct 2016 13:52:37 -0400 Subject: [Openstack-operators] If you are deploying with or against RDO, we'd like to hear from you In-Reply-To: <26A33525-04F9-4422-BC2D-39C306D29222@intel.com> References: <26A33525-04F9-4422-BC2D-39C306D29222@intel.com> Message-ID: Hi Pieter, We are planning on doing a post on the RDO blog [1] of the aggregated (further anonymized if necessary) data once the survey is closed. The last question where we ask for your contact information will obviously not be published. [1]: https://www.rdoproject.org/blog/ David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Tue, Oct 18, 2016 at 11:56 AM, Kruithof Jr, Pieter wrote: > Hi David, > > Can you talk a bit about how you intend to share the results and/or data from the survey? > > Thanks, > > Piet > > > On 10/18/16, 9:48 AM, "David Moreau Simard" wrote: > > Hi openstack-operators, > > We're currently gathering feedback from RDO users and operators with > the help of a very small and anonymous survey [1]. > > It's not very in-depth as we value your time and we all know filling > surveys is boring but it'd be very valuable to us. > > If you'd like to chat RDO more in details at the upcoming OpenStack > summit: what we're doing right, what we're doing wrong or even if you > have questions about either starting to use it or how to get > involved... > Feel free to get in touch with me or join us at the RDO community meetup [2]. > > Thanks ! 
> > [1]: https://www.redhat.com/archives/rdo-list/2016-October/msg00128.html
> > [2]: https://www.eventbrite.com/e/an-evening-of-ceph-and-rdo-tickets-28022550202
> >
> > David Moreau Simard
> > Senior Software Engineer | Openstack RDO
> >
> > dmsimard = [irc, github, twitter]
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From heidijoy at openstack.org Tue Oct 18 18:36:37 2016
From: heidijoy at openstack.org (Heidi Joy Tretheway)
Date: Tue, 18 Oct 2016 11:36:37 -0700
Subject: [Openstack-operators] User Survey results & DIY survey analysis tool
Message-ID: <68EE5BAC-5A62-4777-9CF4-820999EE4552@openstack.org>

Hello Operators,

First, a HUGE thank you for contributing to our eighth OpenStack User Survey. I wanted to alert you that the results are now available. This cycle, we focused on a deployments-only update of the charts most often cited by our community. On openstack.org/user-survey, you'll be able to download the 12-page PDF "highlights" report (as well as past reports), view a 2-minute video overview, and learn more about these key findings:

- NPS for deployments continues to tick up, 8 points higher than a year ago.
- The share of deployments in production is 20% higher than a year ago.
- Cost, operational efficiency and innovation are the top three business drivers.
- Significantly higher interest in NFV and bare metal, and containers leads the list of emerging technologies three cycles in a row.
- OpenStack is adopted by companies of every size. Nearly one-quarter of users are companies smaller than 100 people.

Want to dig deeper? We're unveiling a beta version of our User Survey analysis tool at www.openstack.org/analytics that enables you to compare 2016 data to prior data, and to apply six global filters to evaluate key metrics that matter to you.

We look forward to February, when we'll ask you to answer the full-length User Survey. The report will be available by our Boston Summit, May 8-12, 2017. Please feel free to send me any questions or feedback about either the report or analysis tool.

Heidi Joy Tretheway
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: heidi.tretheway

From jon at csail.mit.edu Tue Oct 18 18:50:11 2016
From: jon at csail.mit.edu (Jonathan D. Proulx)
Date: Tue, 18 Oct 2016 14:50:11 -0400
Subject: [Openstack-operators] PCI passthrough trying to use busy resource?
Message-ID: <20161018185011.GB9337@csail.mit.edu>

Hi all,

I have a test GPU system that seemed to be working properly under Kilo, running 1- and 2-GPU instance types on an 8-GPU server.

After the Mitaka upgrade it seems to always try to assign the same device, which is already in use, rather than pick one of the 5 currently available:

  Build of instance 9542cc63-793c-440e-9a57-cc06eb401839 was
  re-scheduled: Requested operation is not valid: PCI device
  0000:09:00.0 is in use by driver QEMU, domain instance-000abefa
  _do_build_and_run_instance
  /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1945

It tries to schedule 5 times, but each time uses the same busy device; since only 3 are currently in use, it would have succeeded if it had just picked a new one each try.

In trying to debug this I realize I have no idea how devices are selected.
Does OpenStack track which PCI devices are claimed, or is that a libvirt function? In either case, where would I look to find out what it thinks the current state is?

Thanks,
-Jon
--

From jon at csail.mit.edu Tue Oct 18 19:05:46 2016
From: jon at csail.mit.edu (Jonathan D. Proulx)
Date: Tue, 18 Oct 2016 15:05:46 -0400
Subject: [Openstack-operators] PCI passthrough trying to use busy resource?
In-Reply-To: <20161018185011.GB9337@csail.mit.edu>
References: <20161018185011.GB9337@csail.mit.edu>
Message-ID: <20161018190546.GC9337@csail.mit.edu>

Answering my own question a bit faster than I thought I could.

The nova DB has a pci_devices table. What happened was there was an intermediate state where the pci_passthrough_whitelist value on the hypervisor was missing. Apparently during that time the rows for this hypervisor in the pci_devices table got marked as deleted. Then when the nova.conf got fixed they got recreated (even though the old 'deleted' resources were really actively in use), so I end up with this colliding state:

> SELECT created_at,deleted_at,deleted,id,compute_node_id,address,status,instance_uuid FROM pci_devices WHERE address='0000:09:00.0';
+---------------------+---------------------+---------+----+-----------------+--------------+-----------+--------------------------------------+
| created_at          | deleted_at          | deleted | id | compute_node_id | address      | status    | instance_uuid                        |
+---------------------+---------------------+---------+----+-----------------+--------------+-----------+--------------------------------------+
| 2016-07-06 00:12:30 | 2016-10-13 21:04:53 |       4 |  4 |              90 | 0000:09:00.0 | allocated | 9269391a-4ce4-4c8d-993d-5ad7a9c3879b |
| 2016-10-18 18:01:35 | NULL                |       0 | 12 |              90 | 0000:09:00.0 | available | NULL                                 |
+---------------------+---------------------+---------+----+-----------------+--------------+-----------+--------------------------------------+

Since it's only really 3 entries I can fix this by hand, then head over to bug report land.

-Jon

On Tue, Oct 18, 2016 at 02:50:11PM -0400, Jonathan D. Proulx wrote:
:Hi all,
:
:I have a test GPU system that seemed to be working properly under Kilo,
:running 1- and 2-GPU instance types on an 8-GPU server.
:
:After the Mitaka upgrade it seems to always try to assign the same device,
:which is already in use, rather than pick one of the 5 currently
:available:
:
: Build of instance 9542cc63-793c-440e-9a57-cc06eb401839 was
: re-scheduled: Requested operation is not valid: PCI device
: 0000:09:00.0 is in use by driver QEMU, domain instance-000abefa
: _do_build_and_run_instance
: /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1945
:
:It tries to schedule 5 times, but each time uses the same busy
:device; since only 3 are currently in use, it would have succeeded
:if it had just picked a new one each try.
:
:In trying to debug this I realize I have no idea how devices are
:selected. Does OpenStack track which PCI devices are claimed, or is
:that a libvirt function? In either case, where would I look to find
:out what it thinks the current state is?
:
:Thanks,
:-Jon
:--
--

From sean at coreitpro.com Wed Oct 19 16:22:53 2016
From: sean at coreitpro.com (Sean M.
Collins) Date: Wed, 19 Oct 2016 16:22:53 +0000 Subject: [Openstack-operators] [Nova][icehouse]Any way to rotating log by size In-Reply-To: <159656130556744984BEE6E8683CE872135CBEB9@G01JPEXMBYT04> References: <159656130556744984BEE6E8683CE872135CBEB9@G01JPEXMBYT04> Message-ID: <01000157ddc0a81d-e5e88f83-c750-4bc6-80f3-0bc82fdd67d6-000000@email.amazonses.com> Zhang, Peng wrote: > [logger_root] > level = DEBUG So, you're setting the logging to level to DEBUG - if I understand correctly. In a production environment that is going to fill up your disks very quickly. Which is why even running cron hourly to rotate the files still results in a full disk. -- Sean M. Collins From serverascode at gmail.com Wed Oct 19 16:34:02 2016 From: serverascode at gmail.com (Curtis) Date: Wed, 19 Oct 2016 10:34:02 -0600 Subject: [Openstack-operators] [telecom-nfv] Barcelona Sessions Message-ID: Hi All, There are two operator related telecom/nfv working sessions at the Barcelona summit. Thanks to the User Committee and Ops teams for allowing us the time for both these sessions. * Tuesday Ops Summit session Session - https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/17341/ops-telecomnfv Etherpad - https://etherpad.openstack.org/p/BCN-ops-telco-nfv * Wednesday Ops Telecom/NFV Functional Team Meeting Session - https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16768/openstack-operators-telecomnfv-functional-team Etherpad - https://etherpad.openstack.org/p/BCN-ops-telcom-nfv-team If you are interested in attending, please have a look at the above etherpads and feel free to add any topics, questions, etc to them so that we can make good use of our time there. :) If you have any colleagues that might be interested in either or both sessions, please pass this on, as we are especially looking for participants in the ops telecom/nfv functional team. Thanks, Curtis. From mkassawara at gmail.com Wed Oct 19 17:05:33 2016 From: mkassawara at gmail.com (Matt Kassawara) Date: Wed, 19 Oct 2016 11:05:33 -0600 Subject: [Openstack-operators] [Neutron][LBaaS] Architecture/Wiring for LBaaS service extension In-Reply-To: References: Message-ID: Does this [1] help? [1] http://docs.openstack.org/newton/networking-guide/config-lbaas.html On Tue, Oct 18, 2016 at 10:48 AM, Adam Lawson wrote: > Greetings fellow stackers. > > So I found the list of service extensions [1,] the modular architecture > [2] and info re the driver API [3]. The architecture diagram [4] doesn't > show up in link3 and furthermore shows it was last updated in 2014 which > tells me it has probably changed since then. > > Where is the best/most recent info for Neutron's LBaaS service extension? > > [1] http://docs.openstack.org/developer/neutron/devref/ > service_extensions.html > [2] https://wiki.openstack.org/wiki/Neutron/LBaaS/Architecture > [3] https://wiki.openstack.org/wiki/Neutron/LBaaS/DriverAPI > [4] https://wiki.openstack.org/wiki/File:Lbaas_arch.JPG > > //adam > > *Adam Lawson* > > Principal Architect, CEO > Office: +1-916-794-5706 > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lubosz.kosnik at intel.com Wed Oct 19 17:51:44 2016 From: lubosz.kosnik at intel.com (Kosnik, Lubosz) Date: Wed, 19 Oct 2016 17:51:44 +0000 Subject: [Openstack-operators] [Neutron][LBaaS] Architecture/Wiring for LBaaS service extension In-Reply-To: References: Message-ID: Also you should take a look into octavia documentation [1]. Please remember that we?re in process of killing Neutron-LBaaS v2 (v1 already removed) and we?re working on making Octavia default LBaaS in OpenStack. For now you still need to use nlbaas to use Octavia - which is a reference driver for nlbaas. [1] http://docs.openstack.org/developer/octavia/ Lubosz Kosnik Cloud Software Engineer OSIC lubosz.kosnik at intel.com On Oct 19, 2016, at 12:05 PM, Matt Kassawara > wrote: Does this [1] help? [1] http://docs.openstack.org/newton/networking-guide/config-lbaas.html On Tue, Oct 18, 2016 at 10:48 AM, Adam Lawson > wrote: Greetings fellow stackers. So I found the list of service extensions [1,] the modular architecture [2] and info re the driver API [3]. The architecture diagram [4] doesn't show up in link3 and furthermore shows it was last updated in 2014 which tells me it has probably changed since then. Where is the best/most recent info for Neutron's LBaaS service extension? [1] http://docs.openstack.org/developer/neutron/devref/service_extensions.html [2] https://wiki.openstack.org/wiki/Neutron/LBaaS/Architecture [3] https://wiki.openstack.org/wiki/Neutron/LBaaS/DriverAPI [4] https://wiki.openstack.org/wiki/File:Lbaas_arch.JPG //adam Adam Lawson Principal Architect, CEO Office: +1-916-794-5706 _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at mattfischer.com Wed Oct 19 18:26:00 2016 From: matt at mattfischer.com (Matt Fischer) Date: Wed, 19 Oct 2016 12:26:00 -0600 Subject: [Openstack-operators] [Nova][icehouse]Any way to rotating log by size In-Reply-To: <01000157ddc0a81d-e5e88f83-c750-4bc6-80f3-0bc82fdd67d6-000000@email.amazonses.com> References: <159656130556744984BEE6E8683CE872135CBEB9@G01JPEXMBYT04> <01000157ddc0a81d-e5e88f83-c750-4bc6-80f3-0bc82fdd67d6-000000@email.amazonses.com> Message-ID: On Wed, Oct 19, 2016 at 10:22 AM, Sean M. Collins wrote: > Zhang, Peng wrote: > > [logger_root] > > level = DEBUG > > > So, you're setting the logging to level to DEBUG - if I understand > correctly. In a production environment that is going to fill up your > disks very quickly. Which is why even running cron hourly to rotate the > files still results in a full disk. > > > In addition for some services logging at debug=True can log things like tokens which you probably don't want logged. I would advise against running anything in production with this on. -------------- next part -------------- An HTML attachment was scrubbed... URL: From armamig at gmail.com Wed Oct 19 22:46:27 2016 From: armamig at gmail.com (Armando M.) Date: Wed, 19 Oct 2016 15:46:27 -0700 Subject: [Openstack-operators] [Neutron] retiring python-neutron-pd-driver Message-ID: To whom it may concern, I have started the procedure [1] to retire project [2]. 
If you are affected by this, this is the last opportunity to provide feedback. That said, users should be able to use the in-tree version of dibbler as documented in [3].

Cheers,
Armando

[1] https://review.openstack.org/#/q/I77099ba826b8c7d28379a823b4dc74aa65e653d8
[2] http://git.openstack.org/cgit/openstack/python-neutron-pd-driver/
[3] http://docs.openstack.org/newton/networking-guide/config-ipv6.html#configuring-the-dibbler-server

From changzhi1990 at gmail.com Thu Oct 20 10:45:09 2016
From: changzhi1990 at gmail.com (zhi)
Date: Thu, 20 Oct 2016 18:45:09 +0800
Subject: [Openstack-operators] [openstack-dev][tacker] Unexpected files are created after installing Tacker
Message-ID: 

hi, tackers

I have some questions about installing Tacker. I have a physical environment on which I installed Tacker a few days ago. Today I pulled the latest code, then upgraded the database using:

  tacker-db-manage --database-connection "mysql+pymysql://root:12345 at 127.0.0.1/tacker?charset=utf8" upgrade head

Finally, I executed "python setup.py install". I uploaded the output at [1].

But I ran into some issues with the files that were generated. There is a directory named "tacker/db/vm" in the path "/usr/lib/python2.7/site-packages", but this directory doesn't exist in the source code. In [1], I found the line "creating /usr/lib/python2.7/site-packages/tacker/db/vm", yet the directory "tacker/db/vm" doesn't exist in the tree anymore. Because of this wrong directory and the files inside it, tacker-server doesn't start up correctly.

Could someone tell me the reason why this directory was generated? Thanks

[1]. http://paste.openstack.org/show/586528/

From mvanwink at rackspace.com Thu Oct 20 11:53:08 2016
From: mvanwink at rackspace.com (Matt Van Winkle)
Date: Thu, 20 Oct 2016 11:53:08 +0000
Subject: [Openstack-operators] [Large Deployments Team] Meeting Reminder - October 20, 2016 16:00 UTC
Message-ID: 

Hello LDT folks!

Just a reminder that the monthly meeting is in a little over 3 hours. I apologize for the short notice. I'd like to discuss the upcoming summit, and what our plan is for the time that we have. See you all in #openstack-operators!

Thanks!
VW

From aschultz at redhat.com Thu Oct 20 14:38:15 2016
From: aschultz at redhat.com (Alex Schultz)
Date: Thu, 20 Oct 2016 08:38:15 -0600
Subject: [Openstack-operators] [openstack-dev] [puppet][tripleo][fuel] Upcoming changes to defaults around using processor count for worker configurations
In-Reply-To: 
References: 
Message-ID: 

Hey Sergii,

On Thu, Oct 20, 2016 at 3:34 AM, Sergii Golovatiuk wrote:
> Hi,
>
> On Thu, Sep 29, 2016 at 11:57 PM, Alex Schultz wrote:
>> Hello all,
>>
>> So for many years we've been using either the service defaults
>> (usually a python-determined processor count) or the $processorcount
>> fact from facter in puppet for the worker configuration options of the
>> OpenStack services. If you are currently using the default values
>> provided by the puppet modules, you will be affected by this upcoming
>> change. After much discussion and feedback from deployers, we've
>> decided to change this to a default value that has a cap on it. This
>> is primarily from the feedback when deploying on physical hardware,
>> where processor counts can be 32, 48 or even 512.
>> These values can lead to excessive memory consumption or errors due to
>> connection limits (mysql/rabbit). As such we've come up with a new fact
>> that will be used instead of $processorcount.
>>
>> The new fact is called $os_workers[0]. This fact uses the
>> $processorcount to help weigh in on the number of workers to configure;
>> it won't be less than 2 but is capped at 8. The $os_workers fact
>> will use the larger value of either '2' or '# of processors / 4', but
>> will not exceed 8. The primary goal of this is to improve the user
>> experience when people install services using the puppet modules,
>> without having to tune all of these worker values. We plan on
>> implementing this for all modules as part of the Ocata cycle. This
>> work can be tracked using the os_workers-fact[1] gerrit topic.
>> It should be noted that we have implemented this fact in such a way
>> that operators are free to override it using an external fact to
>> provide their own values as well. If you are currently specifying
>> your own values for the worker configurations in your manifests then
>> this change will not affect you. If you have been relying on the
>> defaults and wish to continue to use the $processorcount logic, we
>> would recommend either implementing your own external fact[2] for this
>> or updating your manifests to provide $::processorcount to the workers
>> configuration.
>
> This doesn't help a lot. I saw a case where 8 neutron-server processes
> allocated 6GB of RAM. From the OOM killer's perspective, the biggest
> single process was MySQL (or Rabbit), as it doesn't add up the memory
> of a group of processes. So instead of killing neutron-server it killed
> MySQL to release some RAM to the node. IMO, I would focus on cgroup
> limits for OpenStack services, as that would allow the operator to
> specify an upper limit on CPU and RAM usage for every service.
>

Yeah, this isn't a fix for excessive RAM utilization of the services
themselves. This primarily reduces the default consumption. I recently
went to implement using this fact in tripleo[0] and it reduced the
default memory utilization of the undercloud by a little over a gig[1]
in our CI runs. This was on a 4-CPU vhost, so we basically halved the
number of running python processes across the 17 different services
that may run. So yeah, this isn't the final solution for dealing with
excessive memory consumption, but rather an improvement over the
existing processor count usage. The primary target for this is really
to prevent accidentally blowing up mysql connections when a host has a
large number of CPUs, while also reducing the overall number of
processes by default.

Thanks,
-Alex

[0] https://review.openstack.org/#/c/386696/
[1] http://people.redhat.com/~aschultz/os_workers.html

>> As always we'd love to hear feedback on this and any other issues
>> people might be facing. We're always available in #puppet-openstack on
>> freenode or via the mailing lists.
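(The arithmetic described above, spelled out -- a sketch of the heuristic only, not the fact's actual code, and the exact rounding is an assumption:

  cores=$(nproc)
  workers=$(( cores / 4 ))
  (( workers < 2 )) && workers=2
  (( workers > 8 )) && workers=8
  echo "os_workers=${workers}"

So a 4-core VM gets 2 workers, a 24-core box gets 6, and the 32-, 48- and 512-core machines mentioned above all pin at the cap of 8.)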
>> Thanks,
>> -Alex
>>
>> [0] https://review.openstack.org/#/c/375146/
>> [1] https://review.openstack.org/#/q/topic:os_workers-fact
>> [2] https://docs.puppet.com/facter/3.4/custom_facts.html#external-facts
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From serverascode at gmail.com Thu Oct 20 17:25:24 2016
From: serverascode at gmail.com (Curtis)
Date: Thu, 20 Oct 2016 11:25:24 -0600
Subject: [Openstack-operators] OpenContrail Users at the BCN Summit?
Message-ID: 

Hi All,

Are there any OpenContrail users that will be at the summit next week? If so let me know, because I'd like to talk to anyone who would be willing to chat with me about it. :) I searched the schedule a bit but didn't find anything obvious. Might have missed it.

Thanks,
Curtis.

From mwelch at tallshorts.com Thu Oct 20 19:03:08 2016
From: mwelch at tallshorts.com (Matthew Welch)
Date: Thu, 20 Oct 2016 15:03:08 -0400
Subject: [Openstack-operators] OpenContrail Users at the BCN Summit?
In-Reply-To: 
References: 
Message-ID: 

A few of our team members will be out at the summit next week. Meeting up to chat would be great.

- matt

On Thu, Oct 20, 2016 at 1:25 PM, Curtis wrote:
> Hi All,
>
> Are there any OpenContrail users that will be at the summit next week?
> If so let me know because I'd like to talk to anyone who would be
> willing to chat with me about it. :) I searched the schedule a bit but
> didn't find anything obvious. Might have missed it.
>
> Thanks,
> Curtis.
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From mihalis68 at gmail.com Thu Oct 20 21:51:33 2016
From: mihalis68 at gmail.com (Chris Morgan)
Date: Thu, 20 Oct 2016 17:51:33 -0400
Subject: [Openstack-operators] 2017 Openstack Operators Mid-Cycle Meetups - venue selection etherpads
Message-ID: 

Hello Everyone,

Here are etherpads for the collection of venue hosting proposals and assessment:

https://etherpad.openstack.org/p/ops-meetup-venue-discuss-spring-2017
https://etherpad.openstack.org/p/ops-meetup-venue-discuss-aug-2017

For your reference, the previous etherpad (for August 2016, which was eventually decided to be in NYC) was:
https://etherpad.openstack.org/p/ops-meetup-venue-discuss

--
Chris Morgan

From pieter.kruithof.jr at intel.com Fri Oct 21 03:20:01 2016
From: pieter.kruithof.jr at intel.com (Kruithof Jr, Pieter)
Date: Fri, 21 Oct 2016 03:20:01 +0000
Subject: [Openstack-operators] Participate in a Usability Study at Barcelona: Get a free bluetooth speaker and OpenStack t-shirt for your time
Message-ID: <60FF081C-DBD5-49A7-B624-B2F7583E6D1E@intel.com>

Apologies for any cross-postings.

Hi Folks,

I wanted to send a second notice that there will be two usability studies being conducted in Barcelona with cloud operators.
We had nearly a 100% show rate last summit and the results had a direct impact on the OpenStackClient. In fact, the results were shared at the OSC working session the next day.

Intel is providing a Philips bluetooth speaker to show our appreciation for your time. In addition, the foundation is providing a t-shirt to each person that participates in the study. I may even give anyone suffering from jetlag a Red Bull to get them through the day.

___

The first study will be on Monday, October 24th and is intended to investigate the current APIs to understand any specific pain points associated with completing tasks that span projects, such as quotas. This study will last 45 minutes per operator.

You can schedule a time here: http://doodle.com/poll/fwfi2sfcuctxv3u8

Note that you may need to set the time zone in Doodle to Spain > Ceuta.

___

The second study will be on Tuesday, October 25th and is intended to investigate the OpenStackClient to understand any specific pain points and opportunities associated with completing tasks with the client. This study will last 45 minutes per operator. We ran a similar study at the previous summit and the feedback from users was that it was a good opportunity to "test drive" the client with an OSC expert in the room with them.

You can schedule a time here: http://doodle.com/poll/894aqsmheaa2mv5a

Note that you may need to set the time zone in Doodle to Spain > Ceuta.

For both studies, someone with the OpenStack UX project will send you a calendar invite after you select a time (or times) convenient for you.

Thanks,

Piet Kruithof
PTL OpenStack UX

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From serverascode at gmail.com Fri Oct 21 09:45:18 2016 From: serverascode at gmail.com (Curtis) Date: Fri, 21 Oct 2016 03:45:18 -0600 Subject: [Openstack-operators] OpenContrail Users at the BCN Summit? In-Reply-To: References: Message-ID: Ok, great. I guess we will have to figure out a way to meet next week. :) On Thu, Oct 20, 2016 at 1:03 PM, Matthew Welch wrote: > A few of our team members will be out at the summit next week. > Meeting up to chat would be great. > > - matt > > On Thu, Oct 20, 2016 at 1:25 PM, Curtis wrote: >> Hi All, >> >> Are there any OpenContrail users that will be at the summit next week? >> If so let me know because I'd like to talk to anyone who would be >> willing to chat with me about it. :) I searched the schedule a bit but >> didn't find anything obvious. Might have missed it. >> >> Thanks, >> Curtis. >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Blog: serverascode.com From grant at absolutedevops.io Fri Oct 21 12:14:08 2016 From: grant at absolutedevops.io (Grant Morley) Date: Fri, 21 Oct 2016 13:14:08 +0100 Subject: [Openstack-operators] Instances failing to launch when rbd backed (ansible Liberty setup) Message-ID: <5bdc7ae7-a8c2-5652-63f8-f8ddbf17a40f@absolutedevops.io> Hi all, We have a openstack-ansible setup and have ceph installed for the backend. However whenever we try and launch a new instance it fails to launch and we get the following error: 2016-10-21 12:08:06.241 70661 INFO nova.virt.libvirt.driver [req-79811c40-8394-4e33-b16d-ff5fa7341b6a 41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1 - - -] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Creating image 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [req-79811c40-8394-4e33-b16d-ff5fa7341b6a 41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1 - - -] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Instance failed to spawn 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Traceback (most recent call last): 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 2156, in _build_resources 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] yield resources 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in _build_and_run_instance 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] block_device_info=block_device_info) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2527, in spawn 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] admin_pass=admin_password) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2939, in _create_image 2016-10-21 12:08:06.242 70661 ERROR 
nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] backend = image('disk') 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2884, in image 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] fname + suffix, image_type) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 967, in image 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] return backend(instance=instance, disk_name=disk_name) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 748, in __init__ 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] rbd_user=self.rbd_user) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 117, in __init__ 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] raise RuntimeError(_('rbd python libraries not found')) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] RuntimeError: rbd python libraries not found It moans about the rbd python libraries not being found, however all of the rbd libraries appear to be installed fine via apt. ( We are running Ubuntu) Compute host packages: dpkg -l | grep ceph ii ceph-common 10.2.3-1trusty amd64 common utilities to mount and interact with a ceph storage cluster ii libcephfs1 10.2.3-1trusty amd64 Ceph distributed file system client library ii python-ceph 10.2.3-1trusty amd64 Meta-package for python libraries for the Ceph libraries ii python-cephfs 10.2.3-1trusty amd64 Python libraries for the Ceph libcephfs library dpkg -l | grep rbd ii librbd1 10.2.3-1trusty amd64 RADOS block device client library ii python-rbd 10.2.3-1trusty amd64 Python libraries for the Ceph librbd library Has anyone come across this before? Ceph is working fine for Glance, it just seems to be with the nova compute hosts. Many thanks, -- Grant Morley Cloud Lead Absolute DevOps Ltd Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP www.absolutedevops.io grant at absolutedevops.io 0845 874 0580 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ado_new.png Type: image/png Size: 4369 bytes Desc: not available URL: From csargiso at gmail.com Fri Oct 21 13:09:50 2016 From: csargiso at gmail.com (Chris Sarginson) Date: Fri, 21 Oct 2016 13:09:50 +0000 Subject: [Openstack-operators] Instances failing to launch when rbd backed (ansible Liberty setup) In-Reply-To: <5bdc7ae7-a8c2-5652-63f8-f8ddbf17a40f@absolutedevops.io> References: <5bdc7ae7-a8c2-5652-63f8-f8ddbf17a40f@absolutedevops.io> Message-ID: It seems like it may be an occurrence of this bug, as you look to be using python venvs: https://bugs.launchpad.net/openstack-ansible/+bug/1509837 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "*/openstack/venvs/*nova-12.0. 16/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 117, in __init__ Chris On Fri, 21 Oct 2016 at 13:19 Grant Morley wrote: > Hi all, > > We have a openstack-ansible setup and have ceph installed for the backend. > However whenever we try and launch a new instance it fails to launch and we > get the following error: > > 2016-10-21 12:08:06.241 70661 INFO nova.virt.libvirt.driver > [req-79811c40-8394-4e33-b16d-ff5fa7341b6a 41c60f65ae914681b6a6ca27a42ff780 > 324844c815084205995aff10b03a85e1 - - -] [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] Creating image > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [req-79811c40-8394-4e33-b16d-ff5fa7341b6a 41c60f65ae914681b6a6ca27a42ff780 > 324844c815084205995aff10b03a85e1 - - -] [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] Instance failed to spawn > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] Traceback (most recent call last): > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", > line 2156, in _build_resources > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] yield resources > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", > line 2009, in _build_and_run_instance > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] > block_device_info=block_device_info) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", > line 2527, in spawn > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] admin_pass=admin_password) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", > line 2939, in _create_image > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] backend = image('disk') > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", > line 2884, in image > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 
5633d98e-5f79-4c13-8d45-7544069f0e6f] fname + suffix, image_type) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", > line 967, in image > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] return backend(instance=instance, > disk_name=disk_name) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", > line 748, in __init__ > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] rbd_user=self.rbd_user) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", > line 117, in __init__ > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] raise RuntimeError(_('rbd python > libraries not found')) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] RuntimeError: rbd python libraries > not found > > It moans about the rbd python libraries not being found, however all of > the rbd libraries appear to be installed fine via apt. ( We are running > Ubuntu) > > Compute host packages: > > dpkg -l | grep ceph > ii ceph-common > 10.2.3-1trusty amd64 common utilities to > mount and interact with a ceph storage cluster > ii libcephfs1 > 10.2.3-1trusty amd64 Ceph distributed file > system client library > ii python-ceph > 10.2.3-1trusty amd64 Meta-package for python > libraries for the Ceph libraries > ii python-cephfs > 10.2.3-1trusty amd64 Python libraries for the > Ceph libcephfs library > > dpkg -l | grep rbd > ii librbd1 > 10.2.3-1trusty amd64 RADOS block device > client library > ii python-rbd > 10.2.3-1trusty amd64 Python libraries for the > Ceph librbd library > > Has anyone come across this before? Ceph is working fine for Glance, it > just seems to be with the nova compute hosts. > > Many thanks, > > -- > Grant Morley > Cloud Lead > Absolute DevOps Ltd > Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP > www.absolutedevops.io grant at absolutedevops.io 0845 > 874 0580 > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ado_new.png Type: image/png Size: 4369 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ado_new.png Type: image/png Size: 4369 bytes Desc: not available URL: From mwelch at tallshorts.com Fri Oct 21 14:15:06 2016 From: mwelch at tallshorts.com (Matthew Welch) Date: Fri, 21 Oct 2016 10:15:06 -0400 Subject: [Openstack-operators] OpenContrail Users at the BCN Summit? In-Reply-To: References: Message-ID: Sounds good. Are you going to the Contrail User Group Meeting? 
- https://www.eventbrite.com/e/opencontrail-user-group-meeting-barcelona-2016-tickets-27805549146 - Matt On Fri, Oct 21, 2016 at 5:45 AM, Curtis wrote: > Ok, great. I guess we will have to figure out a way to meet next week. :) > > On Thu, Oct 20, 2016 at 1:03 PM, Matthew Welch wrote: >> A few of our team members will be out at the summit next week. >> Meeting up to chat would be great. >> >> - matt >> >> On Thu, Oct 20, 2016 at 1:25 PM, Curtis wrote: >>> Hi All, >>> >>> Are there any OpenContrail users that will be at the summit next week? >>> If so let me know because I'd like to talk to anyone who would be >>> willing to chat with me about it. :) I searched the schedule a bit but >>> didn't find anything obvious. Might have missed it. >>> >>> Thanks, >>> Curtis. >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > -- > Blog: serverascode.com From grant at absolutedevops.io Fri Oct 21 14:23:21 2016 From: grant at absolutedevops.io (Grant Morley) Date: Fri, 21 Oct 2016 15:23:21 +0100 Subject: [Openstack-operators] Instances failing to launch when rbd backed (ansible Liberty setup) In-Reply-To: References: <5bdc7ae7-a8c2-5652-63f8-f8ddbf17a40f@absolutedevops.io> Message-ID: <70021dc5-00aa-d08d-bae0-bf4285ee7c43@absolutedevops.io> Hi Chris, That bug suggests there has been a fix in 12.0.8. Hopefully 12.0.16 should have that fixed still with a bit of luck , unless the fix hasn't been carried over. I can see that the bug report says there is also a fix 12.0.9 and 12.0.11. Do you know if there is a workaround we can try for this? As ideally I don't want to be downgrading the openstack_ansible version. Regards, On 21/10/16 14:09, Chris Sarginson wrote: > It seems like it may be an occurrence of this bug, as you look to be > using python venvs: > > https://bugs.launchpad.net/openstack-ansible/+bug/1509837 > > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "*/openstack/venvs/*nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", > line 117, in __init__ > > Chris > > On Fri, 21 Oct 2016 at 13:19 Grant Morley > wrote: > > Hi all, > > We have a openstack-ansible setup and have ceph installed for the > backend. 
However whenever we try and launch a new instance it > fails to launch and we get the following error: > > 2016-10-21 12:08:06.241 70661 INFO nova.virt.libvirt.driver > [req-79811c40-8394-4e33-b16d-ff5fa7341b6a > 41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1 > - - -] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Creating image > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [req-79811c40-8394-4e33-b16d-ff5fa7341b6a > 41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1 > - - -] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Instance > failed to spawn > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Traceback (most > recent call last): > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", > line 2156, in _build_resources > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] yield resources > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", > line 2009, in _build_and_run_instance > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] > block_device_info=block_device_info) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", > line 2527, in spawn > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] > admin_pass=admin_password) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", > line 2939, in _create_image > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] backend = > image('disk') > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", > line 2884, in image > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] fname + > suffix, image_type) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", > line 967, in image > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] return > backend(instance=instance, disk_name=disk_name) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", > line 748, in __init__ > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] > rbd_user=self.rbd_user) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > 
"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", > line 117, in __init__ > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] raise > RuntimeError(_('rbd python libraries not found')) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] RuntimeError: rbd > python libraries not found > > It moans about the rbd python libraries not being found, however > all of the rbd libraries appear to be installed fine via apt. ( We > are running Ubuntu) > > Compute host packages: > > dpkg -l | grep ceph > ii ceph-common 10.2.3-1trusty amd64 > common utilities to mount and interact with a ceph storage cluster > ii libcephfs1 10.2.3-1trusty amd64 > Ceph distributed file system client library > ii python-ceph 10.2.3-1trusty amd64 > Meta-package for python libraries for the Ceph libraries > ii python-cephfs 10.2.3-1trusty > amd64 Python libraries for the Ceph libcephfs library > > dpkg -l | grep rbd > ii librbd1 10.2.3-1trusty amd64 > RADOS block device client library > ii python-rbd 10.2.3-1trusty amd64 > Python libraries for the Ceph librbd library > > Has anyone come across this before? Ceph is working fine for > Glance, it just seems to be with the nova compute hosts. > > Many thanks, > > > -- > Grant Morley > Cloud Lead > Absolute DevOps Ltd > Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP > www.absolutedevops.io > grant at absolutedevops.io 0845 874 0580 > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Grant Morley Cloud Lead Absolute DevOps Ltd Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP www.absolutedevops.io grant at absolutedevops.io 0845 874 0580 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ado_new.png Type: image/png Size: 4369 bytes Desc: not available URL: From Jesse.Pretorius at rackspace.co.uk Fri Oct 21 14:38:55 2016 From: Jesse.Pretorius at rackspace.co.uk (Jesse Pretorius) Date: Fri, 21 Oct 2016 14:38:55 +0000 Subject: [Openstack-operators] Instances failing to launch when rbd backed (ansible Liberty setup) In-Reply-To: <70021dc5-00aa-d08d-bae0-bf4285ee7c43@absolutedevops.io> References: <5bdc7ae7-a8c2-5652-63f8-f8ddbf17a40f@absolutedevops.io> <70021dc5-00aa-d08d-bae0-bf4285ee7c43@absolutedevops.io> Message-ID: <8BD67077-2815-4813-A2A3-82952DD76FA0@rackspace.co.uk> Hi Grant/Chris, Can you verify whether there is a symlink from inside the venv to the location of the RBD python libs in the system packages? If there isn?t one then you should be able to re-execute the os-nova-install.yml playbook as it gets executed here: https://github.com/openstack/openstack-ansible/blob/liberty/playbooks/roles/os_nova/tasks/nova_compute_kvm_install.yml#L103-L123 If you can file a bug with some information regarding what actions led you to get to this point it?d be great for us to try and figure out what went wrong so that we can improve things for the future. 
Thanks, Jesse IRC: odyssey4me From: Grant Morley Date: Friday, October 21, 2016 at 3:23 PM To: Chris Sarginson , OpenStack Operators Cc: "ian.banks at serverchoice.com" Subject: Re: [Openstack-operators] Instances failing to launch when rbd backed (ansible Liberty setup) Hi Chris, That bug suggests there has been a fix in 12.0.8. Hopefully 12.0.16 should have that fixed still with a bit of luck , unless the fix hasn't been carried over. I can see that the bug report says there is also a fix 12.0.9 and 12.0.11. Do you know if there is a workaround we can try for this? As ideally I don't want to be downgrading the openstack_ansible version. Regards, On 21/10/16 14:09, Chris Sarginson wrote: It seems like it may be an occurrence of this bug, as you look to be using python venvs: https://bugs.launchpad.net/openstack-ansible/+bug/1509837 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 117, in __init__ Chris On Fri, 21 Oct 2016 at 13:19 Grant Morley > wrote: Hi all, We have a openstack-ansible setup and have ceph installed for the backend. However whenever we try and launch a new instance it fails to launch and we get the following error: 2016-10-21 12:08:06.241 70661 INFO nova.virt.libvirt.driver [req-79811c40-8394-4e33-b16d-ff5fa7341b6a 41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1 - - -] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Creating image 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [req-79811c40-8394-4e33-b16d-ff5fa7341b6a 41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1 - - -] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Instance failed to spawn 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Traceback (most recent call last): 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 2156, in _build_resources 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] yield resources 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in _build_and_run_instance 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] block_device_info=block_device_info) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2527, in spawn 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] admin_pass=admin_password) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2939, in _create_image 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] backend = image('disk') 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File 
"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2884, in image 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] fname + suffix, image_type) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 967, in image 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] return backend(instance=instance, disk_name=disk_name) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 748, in __init__ 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] rbd_user=self.rbd_user) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 117, in __init__ 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] raise RuntimeError(_('rbd python libraries not found')) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] RuntimeError: rbd python libraries not found It moans about the rbd python libraries not being found, however all of the rbd libraries appear to be installed fine via apt. ( We are running Ubuntu) Compute host packages: dpkg -l | grep ceph ii ceph-common 10.2.3-1trusty amd64 common utilities to mount and interact with a ceph storage cluster ii libcephfs1 10.2.3-1trusty amd64 Ceph distributed file system client library ii python-ceph 10.2.3-1trusty amd64 Meta-package for python libraries for the Ceph libraries ii python-cephfs 10.2.3-1trusty amd64 Python libraries for the Ceph libcephfs library dpkg -l | grep rbd ii librbd1 10.2.3-1trusty amd64 RADOS block device client library ii python-rbd 10.2.3-1trusty amd64 Python libraries for the Ceph librbd library Has anyone come across this before? Ceph is working fine for Glance, it just seems to be with the nova compute hosts. Many thanks, -- [cid:part1.9296627C.98B6FDF0 at absolutedevops.io] Grant Morley Cloud Lead Absolute DevOps Ltd Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP www.absolutedevops.io grant at absolutedevops.io 0845 874 0580 _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- [cid:image001.png at 01D22BB1.3B2EE320] Grant Morley Cloud Lead Absolute DevOps Ltd Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP www.absolutedevops.io grant at absolutedevops.io 0845 874 0580 ________________________________ Rackspace Limited is a company registered in England & Wales (company registered number 03897010) whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by e-mail at abuse at rackspace.com and delete the original message. Your cooperation is appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 4370 bytes Desc: image001.png URL: From klindgren at godaddy.com Fri Oct 21 15:42:42 2016 From: klindgren at godaddy.com (Kris G. Lindgren) Date: Fri, 21 Oct 2016 15:42:42 +0000 Subject: [Openstack-operators] Instances failing to launch when rbd backed (ansible Liberty setup) In-Reply-To: <5bdc7ae7-a8c2-5652-63f8-f8ddbf17a40f@absolutedevops.io> References: <5bdc7ae7-a8c2-5652-63f8-f8ddbf17a40f@absolutedevops.io> Message-ID: <70B7B181-7A96-449F-8249-332159D64D43@godaddy.com> From the traceback it looks like nova-compute is running out of a venv. You need to activate the venv, most likely via: source /openstack/venvs/nova-12.0.16/.venv/bin/activate then run: pip freeze. If you don?t see the RBD stuff ? then that is your issue. You might be able to fix via: pip install rbd. Venv?s are self-contained python installs, so they do not use the system level python packages at all. I would also ask for some help in the #openstack-ansible channel on irc as well. ___________________________________________________________________ Kris Lindgren Senior Linux Systems Engineer GoDaddy From: Grant Morley Date: Friday, October 21, 2016 at 6:14 AM To: OpenStack Operators Cc: "ian.banks at serverchoice.com" Subject: [Openstack-operators] Instances failing to launch when rbd backed (ansible Liberty setup) Hi all, We have a openstack-ansible setup and have ceph installed for the backend. However whenever we try and launch a new instance it fails to launch and we get the following error: 2016-10-21 12:08:06.241 70661 INFO nova.virt.libvirt.driver [req-79811c40-8394-4e33-b16d-ff5fa7341b6a 41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1 - - -] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Creating image 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [req-79811c40-8394-4e33-b16d-ff5fa7341b6a 41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1 - - -] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Instance failed to spawn 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Traceback (most recent call last): 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 2156, in _build_resources 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] yield resources 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in _build_and_run_instance 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] block_device_info=block_device_info) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2527, in spawn 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] 
admin_pass=admin_password) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2939, in _create_image 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] backend = image('disk') 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2884, in image 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] fname + suffix, image_type) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 967, in image 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] return backend(instance=instance, disk_name=disk_name) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 748, in __init__ 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] rbd_user=self.rbd_user) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] File "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 117, in __init__ 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] raise RuntimeError(_('rbd python libraries not found')) 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] RuntimeError: rbd python libraries not found It moans about the rbd python libraries not being found; however, all of the rbd libraries appear to be installed fine via apt. (We are running Ubuntu.) Compute host packages: dpkg -l | grep ceph ii ceph-common 10.2.3-1trusty amd64 common utilities to mount and interact with a ceph storage cluster ii libcephfs1 10.2.3-1trusty amd64 Ceph distributed file system client library ii python-ceph 10.2.3-1trusty amd64 Meta-package for python libraries for the Ceph libraries ii python-cephfs 10.2.3-1trusty amd64 Python libraries for the Ceph libcephfs library dpkg -l | grep rbd ii librbd1 10.2.3-1trusty amd64 RADOS block device client library ii python-rbd 10.2.3-1trusty amd64 Python libraries for the Ceph librbd library Has anyone come across this before? Ceph is working fine for Glance; it just seems to be an issue on the nova compute hosts. Many thanks, -- Grant Morley Cloud Lead Absolute DevOps Ltd Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP www.absolutedevops.io grant at absolutedevops.io 0845 874 0580 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 4370 bytes Desc: image001.png URL:
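A quick way to run the check Kris describes, as a sketch (the venv path is taken from the traceback above; depending on the OSA release the activate script sits under bin/ or .venv/bin/ beneath the venv root):

    # activate the nova venv and see whether its interpreter can import the binding
    source /openstack/venvs/nova-12.0.16/bin/activate
    pip freeze | grep -i rbd          # no output means the venv has no rbd package
    python -c 'import rbd, rados'     # an ImportError here reproduces the nova error
    deactivate

Note that the apt-installed python-rbd lands in /usr/lib/python2.7/dist-packages, which an isolated venv will not look at, so the dpkg output above and the nova error do not contradict each other.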
From edgar.magana at workday.com Fri Oct 21 15:43:20 2016 From: edgar.magana at workday.com (Edgar Magana) Date: Fri, 21 Oct 2016 15:43:20 +0000 Subject: [Openstack-operators] OpenContrail Users at the BCN Summit? In-Reply-To: References: Message-ID: Curtis, I will be attending the summit. We have OpenContrail in production in multiple data centers. I am happy to share our experiences with you and your team. The OpenContrail User Group is also a good forum to get in touch with other operators. This time I will be the host, so it will be a bit different, with a lot more collaboration. https://www.eventbrite.com/e/opencontrail-user-group-meeting-barcelona-2016-tickets-27805549146 I am also moderating a networking Birds of a Feather session: https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16801/openstack-networking-neutron-and-beyond-experiences-and-best-practices-bof Join us! Edgar On 10/20/16, 6:25 PM, "Curtis" wrote: Hi All, Are there any OpenContrail users that will be at the summit next week? If so let me know because I'd like to talk to anyone who would be willing to chat with me about it. :) I searched the schedule a bit but didn't find anything obvious. Might have missed it. Thanks, Curtis. _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From grant at absolutedevops.io Fri Oct 21 15:49:09 2016 From: grant at absolutedevops.io (Grant Morley) Date: Fri, 21 Oct 2016 16:49:09 +0100 Subject: Re: [Openstack-operators] Instances failing to launch when rbd backed (ansible Liberty setup) In-Reply-To: <70B7B181-7A96-449F-8249-332159D64D43@godaddy.com> References: <5bdc7ae7-a8c2-5652-63f8-f8ddbf17a40f@absolutedevops.io> <70B7B181-7A96-449F-8249-332159D64D43@godaddy.com> Message-ID: <844ff3e9-daca-059d-0ac2-32f309f820b1@absolutedevops.io> Thanks Kris, I have run the commands as suggested and there is no rbd installed. However, when we try to manually install rbd with pip we get the following error: pip install rbd DEPRECATION: --allow-all-external has been deprecated and will be removed in the future. Due to changes in the repository protocol, it no longer has any effect. Ignoring indexes: https://pypi.python.org/simple Collecting rbd Could not find a version that satisfies the requirement rbd (from versions: ) No matching distribution found for rbd I assume the playbooks are coming across the same issue, which is why we are having this problem. I will also ask in the #openstack-ansible channel for some help. Pip is using the local repo servers that are created by openstack-ansible (I assume), so looking at this, it appears they don't have the correct packages. Regards, On 21/10/16 16:42, Kris G. Lindgren wrote: > > From the traceback it looks like nova-compute is running out of a venv. > > You need to activate the venv, most likely via: source > /openstack/venvs/nova-12.0.16/.venv/bin/activate then run: pip > freeze. If you don't see the RBD stuff, then that is your issue. > You might be able to fix via: pip install rbd. > > Venvs are self-contained python installs, so they do not use the > system level python packages at all. > > I would also ask for some help in the #openstack-ansible channel on > irc as well.
> > ___________________________________________________________________ > > Kris Lindgren > > Senior Linux Systems Engineer > > GoDaddy > > *From: *Grant Morley > *Date: *Friday, October 21, 2016 at 6:14 AM > *To: *OpenStack Operators > *Cc: *"ian.banks at serverchoice.com" > *Subject: *[Openstack-operators] Instances failing to launch when rbd > backed (ansible Liberty setup) > > Hi all, > > We have a openstack-ansible setup and have ceph installed for the > backend. However whenever we try and launch a new instance it fails to > launch and we get the following error: > > 2016-10-21 12:08:06.241 70661 INFO nova.virt.libvirt.driver > [req-79811c40-8394-4e33-b16d-ff5fa7341b6a > 41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1 - - > -] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Creating image > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager > [req-79811c40-8394-4e33-b16d-ff5fa7341b6a > 41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1 - - > -] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Instance failed to > spawn > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] Traceback (most recent call last): > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", > line 2156, in _build_resources > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] yield resources > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", > line 2009, in _build_and_run_instance > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] block_device_info=block_device_info) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", > line 2527, in spawn > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] admin_pass=admin_password) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", > line 2939, in _create_image > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] backend = image('disk') > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", > line 2884, in image > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] fname + suffix, image_type) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", > line 967, in image > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] return > backend(instance=instance, disk_name=disk_name) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 
5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", > line 748, in __init__ > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] rbd_user=self.rbd_user) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] File > "/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", > line 117, in __init__ > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] raise RuntimeError(_('rbd > python libraries not found')) > 2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: > 5633d98e-5f79-4c13-8d45-7544069f0e6f] RuntimeError: rbd python > libraries not found > > It moans about the rbd python libraries not being found, however all > of the rbd libraries appear to be installed fine via apt. ( We are > running Ubuntu) > > Compute host packages: > > dpkg -l | grep ceph > ii ceph-common 10.2.3-1trusty amd64 > common utilities to mount and interact with a ceph storage cluster > ii libcephfs1 10.2.3-1trusty amd64 Ceph > distributed file system client library > ii python-ceph 10.2.3-1trusty amd64 > Meta-package for python libraries for the Ceph libraries > ii python-cephfs 10.2.3-1trusty amd64 > Python libraries for the Ceph libcephfs library > > dpkg -l | grep rbd > ii librbd1 10.2.3-1trusty amd64 RADOS > block device client library > ii python-rbd 10.2.3-1trusty amd64 > Python libraries for the Ceph librbd library > > Has anyone come across this before? Ceph is working fine for Glance, > it just seems to be with the nova compute hosts. > > Many thanks, > > -- > > Grant Morley > > Cloud Lead > > AbsoluteDevOps Ltd > Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP > > www.absolutedevops.io > grant at absolutedevops.io 0845 874 0580 > -- Grant Morley Cloud Lead Absolute DevOps Ltd Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP www.absolutedevops.io grant at absolutedevops.io 0845 874 0580 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 4370 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ado_new.png Type: image/png Size: 4369 bytes Desc: not available URL:
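For the record, the pip failure above is expected: the rbd and rados Python bindings are not published to PyPI at all, they only ship with the distro's python-rbd and python-rados packages, so "pip install rbd" will fail against any index. One stopgap sometimes used is to expose the system bindings inside the venv, as a sketch (Ubuntu dist-packages paths assumed; verify the filenames on your hosts before linking):

    ls /usr/lib/python2.7/dist-packages/ | grep -E 'rbd|rados'
    ln -s /usr/lib/python2.7/dist-packages/rbd.so /openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/rbd.so
    ln -s /usr/lib/python2.7/dist-packages/rados.so /openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/rados.so
    service nova-compute restart    # service name may differ under OSA

This is an assumption-heavy workaround rather than the openstack-ansible-blessed fix; the #openstack-ansible channel suggestion above remains the safer route.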
From serverascode at gmail.com Fri Oct 21 16:31:36 2016 From: serverascode at gmail.com (Curtis) Date: Fri, 21 Oct 2016 10:31:36 -0600 Subject: Re: [Openstack-operators] OpenContrail Users at the BCN Summit? In-Reply-To: References: Message-ID: On Fri, Oct 21, 2016 at 9:43 AM, Edgar Magana wrote: > Curtis, > > I will be attending the summit. We have OpenContrail in production in multiple data centers. I am happy to share our experiences with you and your team. > The OpenContrail User Group is also a good forum to get in touch with other operators. This time I will be the host, so it will be a bit different, with a lot more collaboration. > > https://www.eventbrite.com/e/opencontrail-user-group-meeting-barcelona-2016-tickets-27805549146 > > I am also moderating a networking Birds of a Feather session: > https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16801/openstack-networking-neutron-and-beyond-experiences-and-best-practices-bof Awesome, thanks Edgar (and Matt), that is exactly the information I'm looking for. Seems Eventbrite might be getting DoSed right at this moment though. I will check that link later and register. Thanks again, Curtis. > > Join us! > > Edgar > > On 10/20/16, 6:25 PM, "Curtis" wrote: > > Hi All, > > Are there any OpenContrail users that will be at the summit next week? > If so let me know because I'd like to talk to anyone who would be > willing to chat with me about it. :) I searched the schedule a bit but > didn't find anything obvious. Might have missed it. > > Thanks, > Curtis. > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- Blog: serverascode.com From danielle.m.mundle at gmail.com Fri Oct 21 16:38:32 2016 From: danielle.m.mundle at gmail.com (Danielle Mundle) Date: Fri, 21 Oct 2016 11:38:32 -0500 Subject: [Openstack-operators] [UX] Request for participation in OpenStack student research Message-ID: Howdy operators, In my free time, I am mentoring a group of CS students as they work on a UX research class project. They've chosen to build on the work Piet and I did a few months ago around information needs for operators ( https://youtu.be/dktorTIqU5s). Since a few of you provided excellent feedback on that topic before, I was wondering if you'd be willing to help the students out to meet their requirement of conducting a focus group for their own research. It's my understanding they are focusing on improvements in two of the areas Piet and I highlighted - error messages and log files. If you are interested, please contact Travis at travjones0 at gmail.com. He'll provide more information and work with you to coordinate a time in the next 2-3 weeks. And if you have any other contacts who would be interested in weighing in on error messages/logging in OpenStack, the more the merrier. Thanks for your help, --Danielle IRC: uxdanielle -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.l at inwinstack.com Sat Oct 22 04:33:38 2016 From: rico.l at inwinstack.com (Rico Lin) Date: Sat, 22 Oct 2016 12:33:38 +0800 Subject: [Openstack-operators] [tags]Summit session to discuss about maturity of heat Message-ID: Hi everyone, Heat will have a session at the Summit to discuss what we can do as a team/community to help improve the maturity indicators ([1]) for heat. I would like to invite folks from the Ops Tags Team to join this session to help clarify and guide. *Friday October 28* - 11:50am-12:30pm - Improve maturity of heat - https://etherpad.openstack.org/p/heat-ocata-improve-maturity [1] https://www.openstack.org/software/project-navigator/ -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed...
URL: From blair.bethwaite at monash.edu Sat Oct 22 14:38:53 2016 From: blair.bethwaite at monash.edu (Blair Bethwaite) Date: Sat, 22 Oct 2016 15:38:53 +0100 Subject: Re: [Openstack-operators] [scientific] Barcelona Scientific BoF: Call for lightning talks In-Reply-To: <58E7BD20-D0D1-4480-8ED3-B2B1FC4E1397@telfer.org> References: <58E7BD20-D0D1-4480-8ED3-B2B1FC4E1397@telfer.org> Message-ID: Hi all, A reminder: we are still looking for people interested in giving lightning talks in the Scientific WG BoF session on Wednesday @2:15pm [1]. This is a great opportunity to quickly share highlights of what you have been or are working on with the community. [1] https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16779/scientific-working-group-bof-and-poster-session Cheers, Blair On 11 October 2016 at 23:36, Stig Telfer wrote: > Hello all - > > We have our schedule confirmed and will be having a BoF for Scientific OpenStack users at 2:15pm on the summit Wednesday: https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16779/scientific-working-group-bof-and-poster-session > > We are planning to run some lightning talks in this session, typically up to 5 minutes long. If you or your institution have been implementing some bright ideas that take OpenStack into new territory for research computing use cases, let's hear it! > > Please follow up to me and Blair (Scientific WG co-chairs) if you're interested in speaking and would like to bag a slot. > > Best wishes, > Stig > -- Blair Bethwaite Senior HPC Consultant Monash eResearch Centre Monash University Room G26, 15 Innovation Walk, Clayton Campus Clayton VIC 3800 Australia Mobile: 0439-545-002 Office: +61 3-9903-2800 From jon at csail.mit.edu Sat Oct 22 16:13:59 2016 From: jon at csail.mit.edu (Jonathan D. Proulx) Date: Sat, 22 Oct 2016 12:13:59 -0400 Subject: Re: [Openstack-operators] [tags]Summit session to discuss about maturity of heat In-Reply-To: References: Message-ID: <20161022161359.GA5875@csail.mit.edu> I'll try and make it.
https://www.openstack.org/software/project-navigator/ :_______________________________________________ :OpenStack-operators mailing list :OpenStack-operators at lists.openstack.org :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From pieter.kruithof.jr at intel.com Sun Oct 23 10:47:34 2016 From: pieter.kruithof.jr at intel.com (Kruithof Jr, Pieter) Date: Sun, 23 Oct 2016 10:47:34 +0000 Subject: [Openstack-operators] API Usability in Barcelona on Monday: Get a free bluetooth speaker and OpenStack t-shirt Message-ID: <0EB0992F-BA6E-4FCF-A53B-DD4078EB8A7B@intel.com> Apologies for the multiple postings. We?re getting close to running the usability studies and still need a few participants. OpenStack UX is running a series of 45 minute interviews on Monday, October 24th to investigate the current APIs and identify any specific pain points for operators. These studies are initiated by either the project teams or working groups to help understand their customers. You can schedule a time here: http://doodle.com/poll/fwfi2sfcuctxv3u8 Note that you may need to set the time zone in Doodle to Spain > Ceuta Intel is providing a Philips bluetooth speaker to show our appreciation for your time. In addition, the foundation is providing t-shirts to each persona that participate in the study. I may even give anyone suffering from jetlag a Red Bull to get them through the day. Thanks, Piet Kruithof PTL OpenStack UX -------------- next part -------------- An HTML attachment was scrubbed... URL: From pieter.kruithof.jr at intel.com Sun Oct 23 10:50:53 2016 From: pieter.kruithof.jr at intel.com (Kruithof Jr, Pieter) Date: Sun, 23 Oct 2016 10:50:53 +0000 Subject: [Openstack-operators] OpenStackClient Usability in Barcelona on Tuesday: Get a free bluetooth speaker and OpenStack t-shirt Message-ID: Apologies for the multiple postings. We?re getting close to running the usability studies and still need a few participants. OpenStack UX is running a series of 45 minute usability studies on Tuesday, October 25th to investigate the OpenStackClient and identify any specific pain points for operators. These studies are initiated by either the project teams or working groups to help understand their customers. You can schedule a time here: http://doodle.com/poll/894aqsmheaa2mv5a Note that you may need to set the time zone in Doodle to Spain > Ceuta Intel is providing a Philips bluetooth speaker to show our appreciation for your time. In addition, the foundation is providing t-shirts to each persona that participate in the study. I may even give anyone suffering from jetlag a Red Bull to get them through the day. Thanks, Piet Kruithof PTL OpenStack UX -------------- next part -------------- An HTML attachment was scrubbed... URL: From edgar.magana at workday.com Mon Oct 24 07:23:01 2016 From: edgar.magana at workday.com (Edgar Magana) Date: Mon, 24 Oct 2016 07:23:01 +0000 Subject: [Openstack-operators] [tags]Summit session to discuss about maturity of heat In-Reply-To: <20161022161359.GA5875@csail.mit.edu> References: <20161022161359.GA5875@csail.mit.edu> Message-ID: <9C0D3C50-C484-4199-95AC-68CB572DB719@workday.com> Hi Folks, I will try to make it to both sessions? Yes! May The Force of OpenStack Be With You, Edgar On 10/22/16, 5:13 PM, "Jonathan D. Proulx" wrote: I'll try and make it. 
OpsTags has a minuscule number of active participants (like maybe one, and it's not me, but I have been more active in the past so might be of some use), so I'll counter-propose attending: Ops: Ops Tags Team Tue 25 5:05pm-5:45pm AC Hotel - P3 - Montjuic https://etherpad.openstack.org/p/BCN-ops-tags-team Especially if you have any creative ideas on improving maturity indication. I think it's a pretty rough metric at this time, but not sure I have any great ideas about how to tune it. -Jon On Sat, Oct 22, 2016 at 12:33:38PM +0800, Rico Lin wrote: : Hi everyone, : Heat will have a session at the Summit to discuss what we can do as a : team/community to help improve the maturity indicators ([1]) for heat. : I would like to invite folks from the Ops Tags Team to join this session to : help clarify and guide. : : Friday October 28 : * 11:50am-12:30pm - Improve maturity of heat : - [1]https://etherpad.openstack.org/p/heat-ocata-improve-maturity : : [1] [2]https://www.openstack.org/software/project-navigator/ : -- : May The Force of OpenStack Be With You, : Rico Lin : irc: ricolin : :References : : 1. https://etherpad.openstack.org/p/heat-ocata-improve-maturity : 2. https://www.openstack.org/software/project-navigator/
:_______________________________________________ :OpenStack-operators mailing list :OpenStack-operators at lists.openstack.org :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From ben at apolloglobal.net Mon Oct 24 07:58:20 2016 From: ben at apolloglobal.net (Ben Lagunilla) Date: Mon, 24 Oct 2016 15:58:20 +0800 Subject: [Openstack-operators] openstack-ansible - recovery of failed infrastructure host Message-ID: Hi All, How do we go about restoring a failed infrastructure host? The setup is OpenStack Mitaka with high availability configured on 3 infrastructure hosts. Thanks! From Jesse.Pretorius at rackspace.co.uk Mon Oct 24 10:06:38 2016 From: Jesse.Pretorius at rackspace.co.uk (Jesse Pretorius) Date: Mon, 24 Oct 2016 10:06:38 +0000 Subject: Re: [Openstack-operators] openstack-ansible - recovery of failed infrastructure host In-Reply-To: References: Message-ID: <3068B021-C6F9-4734-9D33-84B9152550A7@rackspace.co.uk> Hi Ben, If you stand up a new infrastructure host with the same IP address and name in openstack_user_config, then execute the setup-hosts, setup-infrastructure and setup-openstack plays, it should put everything back in place for you without further intervention. Jesse IRC: odyssey4me On 10/24/16, 9:58 AM, "Ben Lagunilla" wrote: Hi All, How do we go about restoring a failed infrastructure host? The setup is OpenStack Mitaka with high availability configured on 3 infrastructure hosts. Thanks! _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
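Spelled out, the sequence Jesse describes looks roughly like this, as a sketch (assuming the usual /opt/openstack-ansible checkout; the --limit target is a placeholder for the rebuilt host's name):

    cd /opt/openstack-ansible/playbooks
    openstack-ansible setup-hosts.yml --limit new_infra_host
    openstack-ansible setup-infrastructure.yml
    openstack-ansible setup-openstack.yml

Whether --limit is safe for the infrastructure and openstack plays depends on the failure mode, so when in doubt run them unrestricted, as Jesse implies.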
From mrhillsman at gmail.com Mon Oct 24 10:40:51 2016 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 24 Oct 2016 12:40:51 +0200 Subject: Re: [Openstack-operators] openstack-ansible - recovery of failed infrastructure host In-Reply-To: <3068B021-C6F9-4734-9D33-84B9152550A7@rackspace.co.uk> References: <3068B021-C6F9-4734-9D33-84B9152550A7@rackspace.co.uk> Message-ID: <392CF39D-07C4-4966-86FC-61A3FB275D20@gmail.com> Hey Ben, Unfortunately I do not have a way to confirm with you right now, but I also suggest you create a backup of your database as a matter of caution and general best practice. If you can back up anything else, that would be a good thing to consider as well. Kind regards, -- Melvin Hillsman Ops Technical Lead OpenStack Innovation Center mrhillsman at gmail.com phone: (210) 312-1267 mobile: (210) 413-1659 Learner | Ideation | Belief | Responsibility | Command http://osic.org On 10/24/16, 12:06 PM, "Jesse Pretorius" wrote: Hi Ben, If you stand up a new infrastructure host with the same IP address and name in openstack_user_config, then execute the setup-hosts, setup-infrastructure and setup-openstack plays, it should put everything back in place for you without further intervention. Jesse IRC: odyssey4me On 10/24/16, 9:58 AM, "Ben Lagunilla" wrote: Hi All, How do we go about restoring a failed infrastructure host? The setup is OpenStack Mitaka with high availability configured on 3 infrastructure hosts. Thanks! _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From ben at apolloglobal.net Tue Oct 25 07:14:15 2016 From: ben at apolloglobal.net (Ben Lagunilla) Date: Tue, 25 Oct 2016 15:14:15 +0800 Subject: Re: [Openstack-operators] openstack-ansible - recovery of failed infrastructure host In-Reply-To: <392CF39D-07C4-4966-86FC-61A3FB275D20@gmail.com> References: <3068B021-C6F9-4734-9D33-84B9152550A7@rackspace.co.uk> <392CF39D-07C4-4966-86FC-61A3FB275D20@gmail.com> Message-ID: Thanks for the suggestions. We'll run the openstack-ansible scripts on a new infrastructure host. -- ben > On 24 Oct 2016, at 6:40 PM, Melvin Hillsman wrote: > > Hey Ben, > > Unfortunately I do not have a way to confirm with you right now, but I also suggest you create a backup of your database as a matter of caution and general best practice. If you can back up anything else, that would be a good thing to consider as well.
> > Kind regards, > -- > Melvin Hillsman > Ops Technical Lead > OpenStack Innovation Center > > mrhillsman at gmail.com > phone: (210) 312-1267 > mobile: (210) 413-1659 > Learner | Ideation | Belief | Responsibility | Command > http://osic.org > > > On 10/24/16, 12:06 PM, "Jesse Pretorius" wrote: > > Hi Ben, > > If you stand up a new infrastructure host with the same IP address and name in openstack_user_config, then execute the setup-hosts, setup-infrastructure and setup-openstack plays, it should put everything back in place for you without further intervention. > > Jesse > IRC: odyssey4me > > On 10/24/16, 9:58 AM, "Ben Lagunilla" wrote: > > Hi All, > > How do we go about restoring a failed infrastructure host? The setup is OpenStack Mitaka with high availability configured on 3 infrastructure hosts. > > Thanks! > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From mvanwink at rackspace.com Tue Oct 25 07:24:42 2016 From: mvanwink at rackspace.com (Matt Van Winkle) Date: Tue, 25 Oct 2016 07:24:42 +0000 Subject: [Openstack-operators] [Large Deployments Team] [Summit] Reminder - planning session for the week Message-ID: Hello LDT folks at the summit! Just a reminder that we have a planning session from 11:25 to 12:05 [1] where we will figure out what sessions we would like to make sure team members attend and determine what we want to accomplish this summit and the upcoming cycle. We'll have another chance to meet up later in the week and discuss what we've learned/done so far [2]. See you there! Thanks! VW [1] https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16861/large-deployment-team-planning-the-week [2] https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16862/large-deployment-team-recapping-the-week -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom at openstack.org Tue Oct 25 16:38:39 2016 From: tom at openstack.org (Tom Fifield) Date: Tue, 25 Oct 2016 18:38:39 +0200 Subject: Re: [Openstack-operators] 2017 Openstack Operators Mid-Cycle Meetups - venue selection etherpads In-Reply-To: References: Message-ID: <8b7bf680-9e4a-3bf7-2670-c9db053597cb@openstack.org> Reminder! If you're interested in hosting the Feb/March Ops Meetup, get your proposal in by November 7th!
Feel free to ask for help :) Regards, Tom Chris Morgan wrote: > Hello Everyone, > > Here are etherpads for the collection of venue hosting proposals and > assessment: > > https://etherpad.openstack.org/p/ops-meetup-venue-discuss-spring-2017 > https://etherpad.openstack.org/p/ops-meetup-venue-discuss-aug-2017 > > For your reference, the previous etherpad (for August 2016, which was > eventually decided to be in NYC) was: > > https://etherpad.openstack.org/p/ops-meetup-venue-discuss > > -- > Chris Morgan > From changzhi1990 at gmail.com Wed Oct 26 02:37:15 2016 From: changzhi1990 at gmail.com (zhi) Date: Wed, 26 Oct 2016 10:37:15 +0800 Subject: Re: [Openstack-operators] [openstack-dev][tacker] Unexpected files are created after installing Tacker In-Reply-To: References: Message-ID: Sorry for the noise: after I deleted the Tacker repo, cloned it again from GitHub, and ran " python setup.py install " again, everything works OK. No unexpected files are generated anymore. Thanks all. ;-) 2016-10-20 18:45 GMT+08:00 zhi : > hi, tackers > > I have some questions about installing Tacker. > > I have a physical environment on which I installed Tacker a few days ago. > Today I pulled the latest code. > Then I upgraded the database using > " > tacker-db-manage --database-connection "mysql+pymysql://root:12345@ > 127.0.0.1/tacker?charset=utf8" upgrade head > " > Finally I executed the command " python setup.py install ". I uploaded the > output at [1]. > > But I met some issues with the files which were generated. > > There is a directory named " tacker/db/vm " in the path " > /usr/lib/python2.7/site-packages ". But this directory doesn't exist in > the source code. > > In [1], I found this command "creating /usr/lib/python2.7/site- > packages/tacker/db/vm" . But the directory > " tacker/db/vm " doesn't exist anymore. Because of this wrong directory and > the files in it, tacker-server doesn't start up correctly. > > Could someone tell me the reason why this directory was generated? > > > Thanks > > [1]. http://paste.openstack.org/show/586528/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.josefson at gmail.com Wed Oct 26 12:07:24 2016 From: william.josefson at gmail.com (William Josefsson) Date: Wed, 26 Oct 2016 20:07:24 +0800 Subject: [Openstack-operators] Live-migration CPU doesn't have compatibility Message-ID: Hi list, I'm facing issues on Liberty/CentOS7 doing live migrations between two hosts. The hosts are Haswell and Broadwell; however, there is nothing feature-specific running on my VMs. Haswell -> Broadwell works Broadwell -> Haswell fails with the error below. I have on both hosts configured [libvirt] cpu_mode=none and restarted openstack-nova-compute on the hosts; however, that didn't help, with the same error. There's got to be a way of ignoring this check? pls advise.
thx will 2016-10-26 19:36:29.025 1438627 INFO nova.virt.libvirt.driver [req-XXXX] Instance launched has CPU info: {"vendor": "Intel", "model": "Broadwell", "arch": "x86_64", "features": ["smap", "avx", "clflush", "sep", "rtm", "vme", "dtes64", "invpcid", "tsc", "fsgsbase", "xsave", "pge", "vmx", "erms", "xtpr", "cmov", "hle", "smep", "ssse3", "est", "pat", "monitor", "smx", "pbe", "lm", "msr", "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "tm", "sse4.1", "pae", "sse4.2", "pclmuldq", "acpi", "fma", "tsc-deadline", "mmx", "osxsave", "cx8", "mce", "de", "tm2", "ht", "dca", "lahf_lm", "abm", "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "pse", "ds", "invtsc", "pni", "rdtscp", "avx2", "aes", "sse2", "ss", "ds_cpl", "bmi1", "bmi2", "pcid", "fpu", "cx16", "pse36", "mtrr", "movbe", "pdcm", "rdrand", "x2apic"], "topology": {"cores": 10, "cells": 2, "threads": 2, "sockets": 1}} 2016-10-26 19:36:29.028 1438627 ERROR nova.virt.libvirt.driver [req-XXXX] CPU doesn't have compatibility. 0 Refer to http://libvirt.org/html/libvirt-libvirt-host.html#virCPUCompareResult 2016-10-26 19:36:29.057 1438627 ERROR oslo_messaging.rpc.dispatcher [req-XXXX] Exception during message handling: Unacceptable CPU info: CPU doesn't have compatibility. From william.josefson at gmail.com Wed Oct 26 12:16:54 2016 From: william.josefson at gmail.com (William Josefsson) Date: Wed, 26 Oct 2016 20:16:54 +0800 Subject: [Openstack-operators] Live-migrations stuck in MIGRATING state Message-ID: Hi, Some of my VMs fail when I issue nova live-migration, while others work. The pattern seems to be common to instances that are running more intensive workloads such as databases, MQ, etc.; if I try with an idle small instance I don't see these issues. The VM gets stuck in MIGRATING state, and I cannot do anything. When I issue virsh list on the source and destination hosts, I can in fact see that the VM is already running on the destination, but nova show still shows the old host, and the state is stuck in MIGRATING. I also notice this in my logs, as if the nova database and the hypervisors got out of sync about what is running where: 2016-10-26 19:34:42.947 1438627 WARNING nova.compute.manager [req-c321c991-7c58-4b78-924e-773678d27b1a - - - - -] While synchronizing instance power states, found 2 instances in the database and 3 instances on the hypervisor. pls advise on how to get this resolved. Is there a safer way of migrating the instances? Should I try to pause the cpu/memory intensive instances, live-migrate, and then un-pause them when completed on the destination? thx will From mihalis68 at gmail.com Wed Oct 26 13:31:04 2016 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 26 Oct 2016 15:31:04 +0200 Subject: [Openstack-operators] Two proposals up for early 2017 OpenStack Operator's Mid-Cycle Meetup! Message-ID: Hello Everyone, I am very happy to let you know that there are now two proposals up for the early 2017 meetup; please see https://etherpad.openstack.org/p/ops-meetup-venue-discuss-spring-2017 Scroll past the "boilerplate" text to see one for Tokyo and also one for Milan. The Meetups team will next meet online on November 7th on IRC as per usual (e.g. 10am EST) and will be making arrangements to get the community's feedback on both of these. Cheers Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mnaser at vexxhost.com Wed Oct 26 15:45:33 2016 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 26 Oct 2016 11:45:33 -0400 Subject: [Openstack-operators] Live-migration CPU doesn't have compatibility In-Reply-To: References: Message-ID: The VMs have to be restarted so that the libvirt config is updated with the new CPU model. Good luck! On Wed, Oct 26, 2016 at 8:07 AM, William Josefsson wrote: > Hi list, > > I'm facing issues on Liberty/CentOS7 doing live migrations between to > hosts. The hosts are Haswell and Broadwell. However, there is not > feature specific running on my VMs > > Haswell -> Broadwell works > Broadwell -> Haswell fails with the error below. > > > I have on both hosts configured > [libvirt] > cpu_mode=none > > and restarted openstack-nova-compute on hosts, however that didn't > help, with the same error. there gotta be a way of ignoring this > check? pls advice. thx will > > > > 2016-10-26 19:36:29.025 1438627 INFO nova.virt.libvirt.driver > [req-XXXX] Instance launched has CPU info: {"vendor": "Intel", > "model": "Broadwell", "arch": "x86_64", "features": ["smap", "avx", > "clflush", "sep", "rtm", "vme", "dtes64", "invpcid", "tsc", > "fsgsbase", "xsave", "pge", "vmx", "erms", "xtpr", "cmov", "hle", > "smep", "ssse3", "est", "pat", "monitor", "smx", "pbe", "lm", "msr", > "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "tm", "sse4.1", > "pae", "sse4.2", "pclmuldq", "acpi", "fma", "tsc-deadline", "mmx", > "osxsave", "cx8", "mce", "de", "tm2", "ht", "dca", "lahf_lm", "abm", > "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "pse", > "ds", "invtsc", "pni", "rdtscp", "avx2", "aes", "sse2", "ss", > "ds_cpl", "bmi1", "bmi2", "pcid", "fpu", "cx16", "pse36", "mtrr", > "movbe", "pdcm", "rdrand", "x2apic"], "topology": {"cores": 10, > "cells": 2, "threads": 2, "sockets": 1}} > 2016-10-26 19:36:29.028 1438627 ERROR nova.virt.libvirt.driver > [req-XXXX] CPU doesn't have compatibility. > > 0 > > Refer to http://libvirt.org/html/libvirt-libvirt-host.html#virCPUCompareResult > 2016-10-26 19:36:29.057 1438627 ERROR oslo_messaging.rpc.dispatcher > [req-XXXX] Exception during message handling: Unacceptable CPU info: > CPU doesn't have compatibility. > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Mohammed Naser ? vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From william.josefson at gmail.com Wed Oct 26 15:57:20 2016 From: william.josefson at gmail.com (William Josefsson) Date: Wed, 26 Oct 2016 23:57:20 +0800 Subject: [Openstack-operators] Live-migration CPU doesn't have compatibility In-Reply-To: References: Message-ID: Thank you Mohammed. So I just set cpu_mode=none in libvirt section on both source and destination hosts, restart nova-compute, restart VMs on source host, and finally do the live migration? pls let me know if this is wrong. thx will On Wed, Oct 26, 2016 at 11:45 PM, Mohammed Naser wrote: > The VMs have to be restarted so that the libvirt config is updated > with the new CPU model. > > Good luck! > > On Wed, Oct 26, 2016 at 8:07 AM, William Josefsson > wrote: >> Hi list, >> >> I'm facing issues on Liberty/CentOS7 doing live migrations between to >> hosts. The hosts are Haswell and Broadwell. 
However, there is not >> feature specific running on my VMs >> >> Haswell -> Broadwell works >> Broadwell -> Haswell fails with the error below. >> >> >> I have on both hosts configured >> [libvirt] >> cpu_mode=none >> >> and restarted openstack-nova-compute on hosts, however that didn't >> help, with the same error. there gotta be a way of ignoring this >> check? pls advice. thx will >> >> >> >> 2016-10-26 19:36:29.025 1438627 INFO nova.virt.libvirt.driver >> [req-XXXX] Instance launched has CPU info: {"vendor": "Intel", >> "model": "Broadwell", "arch": "x86_64", "features": ["smap", "avx", >> "clflush", "sep", "rtm", "vme", "dtes64", "invpcid", "tsc", >> "fsgsbase", "xsave", "pge", "vmx", "erms", "xtpr", "cmov", "hle", >> "smep", "ssse3", "est", "pat", "monitor", "smx", "pbe", "lm", "msr", >> "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "tm", "sse4.1", >> "pae", "sse4.2", "pclmuldq", "acpi", "fma", "tsc-deadline", "mmx", >> "osxsave", "cx8", "mce", "de", "tm2", "ht", "dca", "lahf_lm", "abm", >> "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "pse", >> "ds", "invtsc", "pni", "rdtscp", "avx2", "aes", "sse2", "ss", >> "ds_cpl", "bmi1", "bmi2", "pcid", "fpu", "cx16", "pse36", "mtrr", >> "movbe", "pdcm", "rdrand", "x2apic"], "topology": {"cores": 10, >> "cells": 2, "threads": 2, "sockets": 1}} >> 2016-10-26 19:36:29.028 1438627 ERROR nova.virt.libvirt.driver >> [req-XXXX] CPU doesn't have compatibility. >> >> 0 >> >> Refer to http://libvirt.org/html/libvirt-libvirt-host.html#virCPUCompareResult >> 2016-10-26 19:36:29.057 1438627 ERROR oslo_messaging.rpc.dispatcher >> [req-XXXX] Exception during message handling: Unacceptable CPU info: >> CPU doesn't have compatibility. >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > -- > Mohammed Naser - vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com From joehuang at huawei.com Wed Oct 26 20:22:14 2016 From: joehuang at huawei.com (joehuang) Date: Wed, 26 Oct 2016 20:22:14 +0000 Subject: [Openstack-operators] [openstack-operators]Control Plane Design (multi-region) Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF559C127C@szxema505-mbx.china.huawei.com> Hello, I just found the link in the massively distributed etherpad https://etherpad.openstack.org/p/BCN-ops-control-plane-design, and learned that lots of operators are also discussing design for multi-region deployments. As Tricircle was mentioned in the discussion, and there are two design summit sessions on Thursday afternoon and Friday morning, please join the discussion if you are interested in Tricircle's current status and what to do for the next release. You can find the sessions at the following link: https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Tricircle%3A Tricircle has just finished its splitting; it is now a project dedicated to networking automation across Neutron in multi-region OpenStack clouds. Best Regards Chaoyi Huang (joehuang) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From omgjlk at us.ibm.com Thu Oct 27 09:01:52 2016 From: omgjlk at us.ibm.com (Jesse Keating) Date: Thu, 27 Oct 2016 09:01:52 +0000 Subject: Re: [Openstack-operators] 2017 Openstack Operators Mid-Cycle Meetups - venue selection etherpads In-Reply-To: <8b7bf680-9e4a-3bf7-2670-c9db053597cb@openstack.org> References: <8b7bf680-9e4a-3bf7-2670-c9db053597cb@openstack.org> Message-ID: An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Thu Oct 27 09:08:24 2016 From: emccormick at cirrusseven.com (Erik McCormick) Date: Thu, 27 Oct 2016 11:08:24 +0200 Subject: Re: [Openstack-operators] 2017 Openstack Operators Mid-Cycle Meetups - venue selection etherpads In-Reply-To: References: <8b7bf680-9e4a-3bf7-2670-c9db053597cb@openstack.org> Message-ID: The PTG is for devs to get together and get real work done. We would be a distraction from that goal. They will also be attending the forum, which will run with the summits, and will be able to spend more time in groups with ops for requirements gathering and such. -Erik On Oct 27, 2016 11:05 AM, "Jesse Keating" wrote: > I may have missed something, but why aren't we meeting at the Project > Technical Gathering, which is at the end of February in Atlanta? > > I understand that this mid-cycle is targeting EU, which is totally > awesome; and if that happens, will there also be operator focused sessions > and such at the PTG? > -jlk > > > > ----- Original message ----- > From: Tom Fifield > To: Chris Morgan , OpenStack Operators < > OpenStack-operators at lists.openstack.org> > Cc: > Subject: Re: [Openstack-operators] 2017 Openstack Operators Mid-Cycle > Meetups - venue selection etherpads > Date: Tue, Oct 25, 2016 6:47 PM > > Reminder! > > If you're interested in hosting the Feb/March Ops Meetup, get your > proposal in by November 7th! Feel free to ask for help :) > > > Regards, > > > > Tom > > Chris Morgan wrote: > > Hello Everyone, > > > > Here are etherpads for the collection of venue hosting proposals and > > assessment: > > > > https://etherpad.openstack.org/p/ops-meetup-venue-discuss-spring-2017 > > https://etherpad.openstack.org/p/ops-meetup-venue-discuss-aug-2017 > > > > For your reference, the previous etherpad (for August 2016, which was > > eventually decided to be in NYC) was: > > > > https://etherpad.openstack.org/p/ops-meetup-venue-discuss > > > > -- > > Chris Morgan >> > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Thu Oct 27 09:20:07 2016 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 27 Oct 2016 03:20:07 -0600 Subject: Re: [Openstack-operators] Live-migration CPU doesn't have compatibility In-Reply-To: References: Message-ID: <5811C6C7.90207@windriver.com> On 10/26/2016 06:07 AM, William Josefsson wrote: > Hi list, > > I'm facing issues on Liberty/CentOS7 doing live migrations between to > hosts. The hosts are Haswell and Broadwell. However, there is not > feature specific running on my VMs > > Haswell -> Broadwell works > Broadwell -> Haswell fails with the error below.
> > I have on both hosts configured > [libvirt] > cpu_mode=none > and restarted openstack-nova-compute on hosts, however that didn't > help, with the same error. there gotta be a way of ignoring this > check? pls advice. thx will If you are using kvm/qemu and set cpu_mode=none, then it will use 'host-model', and any instances started on Broadwell can't be live-migrated onto Haswell. In your case you probably want to set both computes to have: [libvirt] cpu_mode = custom cpu_model = Haswell This will cause nova to start guests with the "Haswell" model on both nodes. Chris
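A sketch of how one might verify the change Chris describes once both computes carry it (instance names are placeholders; a hard reboot is typically what re-renders the guest's libvirt XML with the new cpu element):

    # on each compute, after editing the [libvirt] section of nova.conf:
    systemctl restart openstack-nova-compute
    # restart the guest so its domain XML picks up the pinned model:
    nova reboot --hard <instance>
    virsh dumpxml <instance_name> | grep -A2 '<cpu'   # should now report model Haswell

The trade-off of pinning cpu_model to the oldest CPU in the pool is that guests lose the newer Broadwell-only flags, which is what buys back live migration in both directions.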
From jon at csail.mit.edu Thu Oct 27 11:52:30 2016 From: jon at csail.mit.edu (Jon Proulx) Date: Thu, 27 Oct 2016 13:52:30 +0200 Subject: Re: [Openstack-operators] 2017 Openstack Operators Mid-Cycle Meetups - venue selection etherpads In-Reply-To: References: <8b7bf680-9e4a-3bf7-2670-c9db053597cb@openstack.org> Message-ID: Being a 'distraction' isn't exactly the issue. The ptg/ops-midcycle events are focused inward, whereas the 'forum' (summit-thing) is focused across. The hope is this makes us both more effective in both directions. So there's no need to collocate. Additionally, collocation would require bigger, more expensive, more difficult to site venues. Also agreed location is 'not north america'. There's APAC and EU proposals. Since it's sandwiched between the EU summit and the NA summit I'd advocate an APAC midcycle to spread the love, but that's IMHO... On October 27, 2016 11:08:24 AM GMT+02:00, Erik McCormick wrote: >The PTG is for devs to get together and get real work done. We would be >a >distraction from that goal. They will also be attending the forum which >will run with the summits and will be able to spend more time in groups >with ops for requirements gathering and such. > >-Erik > >On Oct 27, 2016 11:05 AM, "Jesse Keating" wrote: > >> I may have missed something, but why aren't we meeting at the Project >> Technical Gathering, which is at the end of February in Atlanta? >> >> I understand that this mid-cycle is targeting EU, which is totally >> awesome; and if that happens, will there also be operator focused >sessions >> and such at the PTG? >> -jlk >> >> >> >> ----- Original message ----- >> From: Tom Fifield >> To: Chris Morgan , OpenStack Operators < >> OpenStack-operators at lists.openstack.org> >> Cc: >> Subject: Re: [Openstack-operators] 2017 Openstack Operators Mid-Cycle >> Meetups - venue selection etherpads >> Date: Tue, Oct 25, 2016 6:47 PM >> >> Reminder! >> >> If you're interested in hosting the Feb/March Ops Meetup, get your >> proposal in by November 7th! Feel free to ask for help :) >> >> >> Regards, >> >> >> >> Tom >> >> Chris Morgan wrote: >> > Hello Everyone, >> > >> > Here are etherpads for the collection of venue hosting proposals and >> > assessment: >> > >> > https://etherpad.openstack.org/p/ops-meetup-venue-discuss-spring-2017 >> > https://etherpad.openstack.org/p/ops-meetup-venue-discuss-aug-2017 >> > >> > For your reference, the previous etherpad (for August 2016, which was >> > eventually decided to be in NYC) was: >> > >> > https://etherpad.openstack.org/p/ops-meetup-venue-discuss >> > >> > -- >> > Chris Morgan > >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > > > ------------------------------------------------------------------------ > >_______________________________________________ >OpenStack-operators mailing list >OpenStack-operators at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Sent from my Android device with K-9 Mail. Please excuse my brevity. From blair.bethwaite at gmail.com Thu Oct 27 12:27:48 2016 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Thu, 27 Oct 2016 14:27:48 +0200 Subject: Re: [Openstack-operators] Disable console for an instance In-Reply-To: References: Message-ID: Looks like this is not currently possible. Does anyone else have an interest in such a feature? I'm thinking about it from the perspective of a public cloud user who wants to build highly secure / sandboxed instances. Having a virtual terminal straight into a guest login prompt, especially one that allows reset of the guest, is not desirable. On 13 October 2016 at 04:37, Blair Bethwaite wrote: > Hi all, > > Does anyone know whether there is a way to disable the novnc console on a > per instance basis? > > Cheers, > Blair > -- Cheers, ~Blairo -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at redhat.com Thu Oct 27 12:31:00 2016 From: mrunge at redhat.com (Matthias Runge) Date: Thu, 27 Oct 2016 14:31:00 +0200 Subject: Re: [Openstack-operators] 2017 Openstack Operators Mid-Cycle Meetups - venue selection etherpads In-Reply-To: References: <8b7bf680-9e4a-3bf7-2670-c9db053597cb@openstack.org> Message-ID: <9be7d001-d464-74f7-afdb-38aa277032dc@redhat.com> On 27/10/16 11:08, Erik McCormick wrote: > The PTG is for devs to get together and get real work done. We would be > a distraction from that goal. They will also be attending the forum > which will run with the summits and will be able to spend more time in > groups with ops for requirements gathering and such. > > -Erik > > > On Oct 27, 2016 11:05 AM, "Jesse Keating" > wrote: > > I may have missed something, but why aren't we meeting at the > Project Technical Gathering, which is at the end of February in Atlanta? From my experience with OpenStack, feedback from operators has been invaluable. You can easily run things in devstack (or all-in-one deployments), but this is completely different from running at scale. Operators do tell you where the pain points are. Having a dedicated gathering without involving actual operators/users is not that useful IMO.
Matthias -- Matthias Runge Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander

From jon at csail.mit.edu Thu Oct 27 14:02:30 2016 From: jon at csail.mit.edu (Jonathan D. Proulx) Date: Thu, 27 Oct 2016 16:02:30 +0200 Subject: [Openstack-operators] Disable console for an instance In-Reply-To: References: Message-ID: <20161027140230.GA19262@csail.mit.edu> On Thu, Oct 27, 2016 at 02:27:48PM +0200, Blair Bethwaite wrote: : Looks like this is not currently possible. Does anyone else have an : interest in such a feature? : I'm thinking about it from the perspective of a public cloud user who : wants to build highly secure / sandboxed instances. Having a virtual : terminal straight into a guest login prompt, especially one that allows : reset of the guest, is not desirable.

don't put a getty on the TTY :)

Of course there are still race conditions where you could get to the boot loader or something.

Snarkless answer: I can imagine a use case for wanting to toggle this on a per VM basis but don't actually have one myself.
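(To expand on the getty quip: on a systemd guest, something like the following should keep the login prompt off the virtual console; a sketch only, assuming the console is tty1 plus the usual first serial port, so adjust the unit names to taste:

    systemctl mask getty@tty1.service
    systemctl mask serial-getty@ttyS0.service

That only removes the login prompt, of course; it does nothing about the boot loader window or the ability to reset the guest.)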
-Jon : : On 13 October 2016 at 04:37, Blair Bethwaite : <[1]blair.bethwaite at gmail.com> wrote: : : Hi all, : : Does anyone know whether there is a way to disable the novnc console : on a per instance basis? : : Cheers, : Blair : : -- : Cheers, : ~Blairo : :References : : 1. mailto:blair.bethwaite at gmail.com :_______________________________________________ :OpenStack-operators mailing list :OpenStack-operators at lists.openstack.org :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From blair.bethwaite at gmail.com Thu Oct 27 14:08:26 2016 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Thu, 27 Oct 2016 16:08:26 +0200 Subject: [Openstack-operators] Disable console for an instance In-Reply-To: <20161027140230.GA19262@csail.mit.edu> References: <20161027140230.GA19262@csail.mit.edu> Message-ID: On 27 October 2016 at 16:02, Jonathan D. Proulx wrote: > don't put a getty on the TTY :)

Do you know how to do that with Windows? ...you can see the desire for sandboxing now :-).

-- Cheers, ~Blairo

From jon at csail.mit.edu Thu Oct 27 14:11:32 2016 From: jon at csail.mit.edu (Jonathan D. Proulx) Date: Thu, 27 Oct 2016 16:11:32 +0200 Subject: [Openstack-operators] Disable console for an instance In-Reply-To: References: <20161027140230.GA19262@csail.mit.edu> Message-ID: <20161027141132.GB19262@csail.mit.edu> On Thu, Oct 27, 2016 at 04:08:26PM +0200, Blair Bethwaite wrote: :On 27 October 2016 at 16:02, Jonathan D. Proulx wrote: :> don't put a getty on the TTY :) : :Do you know how to do that with Windows? ...you can see the desire for :sandboxing now :-).

Sigh yes I see, http://goodbye-microsoft.com/ has a good solution IMHO

:-- :Cheers, :~Blairo

From lmihaiescu at gmail.com Thu Oct 27 14:15:46 2016 From: lmihaiescu at gmail.com (George Mihaiescu) Date: Thu, 27 Oct 2016 10:15:46 -0400 Subject: [Openstack-operators] Disable console for an instance In-Reply-To: References: <20161027140230.GA19262@csail.mit.edu> Message-ID: Hi Blair, Did you try playing with Nova's policy file and limit the scope for "compute_extension:console_output": "" ? Cheers, George On Thu, Oct 27, 2016 at 10:08 AM, Blair Bethwaite wrote: > On 27 October 2016 at 16:02, Jonathan D. Proulx wrote: > > don't put a getty on the TTY :) > > Do you know how to do that with Windows? ...you can see the desire for > sandboxing now :-). > > -- > Cheers, > ~Blairo > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From blair.bethwaite at gmail.com Thu Oct 27 14:18:16 2016 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Thu, 27 Oct 2016 16:18:16 +0200 Subject: [Openstack-operators] Disable console for an instance In-Reply-To: <20161027141132.GB19262@csail.mit.edu> References: <20161027140230.GA19262@csail.mit.edu> <20161027141132.GB19262@csail.mit.edu> Message-ID: Lol! I don't mind - Microsoft do support and produce some pretty good research, I just wish they'd fix licensing! On 27 October 2016 at 16:11, Jonathan D. Proulx wrote: > On Thu, Oct 27, 2016 at 04:08:26PM +0200, Blair Bethwaite wrote: > :On 27 October 2016 at 16:02, Jonathan D. Proulx wrote: > :> don't put a getty on the TTY :) > : > :Do you know how to do that with Windows? ...you can see the desire for > :sandboxing now :-). > > Sigh yes I see, http://goodbye-microsoft.com/ has a good solution IMHO > > :-- > :Cheers, > :~Blairo -- Cheers, ~Blairo

From blair.bethwaite at gmail.com Thu Oct 27 14:24:48 2016 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Thu, 27 Oct 2016 16:24:48 +0200 Subject: [Openstack-operators] Disable console for an instance In-Reply-To: References: <20161027140230.GA19262@csail.mit.edu> Message-ID: Hi George, On 27 October 2016 at 16:15, George Mihaiescu wrote: > Did you try playing with Nova's policy file and limit the scope for > "compute_extension:console_output": "" ? No, interesting idea though... I suspect it's actually the get_*_console policies we'd need to tweak, I think console_output probably refers to the console log? Anyway, not quite sure how we'd craft policy that would enable us to disable these on a per-instance basis though - is it possible to reference image metadata in the context of the policy rule? -- Cheers, ~Blairo

From lmihaiescu at gmail.com Thu Oct 27 14:53:15 2016 From: lmihaiescu at gmail.com (George Mihaiescu) Date: Thu, 27 Oct 2016 10:53:15 -0400 Subject: [Openstack-operators] Disable console for an instance In-Reply-To: References: <20161027140230.GA19262@csail.mit.edu> Message-ID: You're right, it's probably the following you would want changed:

"compute:get_vnc_console": "",
"compute:get_spice_console": "",
"compute:get_rdp_console": "",
"compute:get_serial_console": "",
"compute:get_mks_console": "",
"compute:get_console_output": "",

I thought the use case is to limit console access to users in a shared project environment, where you might have multiple users seeing each other's instances, and you don't want them to try logging in on the console. You could create a special role that has console access and change the policy file to reference that role for "compute:get_vnc_console", for example. I don't think you can do it on a per-flavor basis.
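To make that concrete, the relevant lines in /etc/nova/policy.json would look something like this, assuming you create a "console" role (just a sketch; check the exact rule names shipped with your release, they moved around between versions):

    "compute:get_vnc_console": "role:console or role:admin",
    "compute:get_spice_console": "role:console or role:admin",
    "compute:get_serial_console": "role:console or role:admin",
    "compute:get_console_output": "role:console or role:admin",

Then only users you've granted the "console" role in keystone (or admins) can fetch a console URL at all.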
Cheers, George

On Thu, Oct 27, 2016 at 10:24 AM, Blair Bethwaite wrote: > Hi George, > > On 27 October 2016 at 16:15, George Mihaiescu > wrote: > > Did you try playing with Nova's policy file and limit the scope for > > "compute_extension:console_output": "" ? > > No, interesting idea though... I suspect it's actually the > get_*_console policies we'd need to tweak, I think console_output > probably refers to the console log? Anyway, not quite sure how we'd > craft policy that would enable us to disable these on a per-instance > basis though - is it possible to reference image metadata in the > context of the policy rule? > > -- > Cheers, > ~Blairo > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From william.josefson at gmail.com Thu Oct 27 15:31:16 2016 From: william.josefson at gmail.com (William Josefsson) Date: Thu, 27 Oct 2016 23:31:16 +0800 Subject: [Openstack-operators] Live-migration CPU doesn't have compatibility In-Reply-To: <5811C6C7.90207@windriver.com> References: <5811C6C7.90207@windriver.com> Message-ID: On Thu, Oct 27, 2016 at 5:20 PM, Chris Friesen wrote: > In your case you probably want to set both computes to have: > > [libvirt] > cpu_mode = custom > cpu_model = Haswell > Hi Chris, thanks! Yerps, I finally got it working. However, I set cpu_model=kvm64 everywhere and it seems to work. It is listed here: https://wiki.openstack.org/wiki/LibvirtXMLCPUModel Hopefully there is no performance impact from whatever cpu_model is set to, or would 'kvm64' as a model negatively affect my VMs? thx will

From mnaser at vexxhost.com Thu Oct 27 18:39:30 2016 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 27 Oct 2016 14:39:30 -0400 Subject: [Openstack-operators] Live-migration CPU doesn't have compatibility In-Reply-To: References: <5811C6C7.90207@windriver.com> Message-ID: Depending on your workload, it will. If your workloads depend on any custom CPU extensions, they will miss out on them and performance will be decreased. My personal suggestion is to read the docs for it and use the "smallest common denominator" in terms of CPU usage.
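If you'd rather have libvirt compute that smallest common denominator for you than guess at it, something like this should work (a rough sketch; iirc cpu-baseline will extract the <cpu> elements out of full capabilities dumps):

    # on each compute node
    virsh capabilities > cpu-$(hostname).xml

    # gather the files on one host, then:
    cat cpu-*.xml > all-cpus.xml
    virsh cpu-baseline all-cpus.xml

The model it prints is then a safe cpu_model value for every host in the set.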
> > On Thu, Oct 27, 2016 at 11:31 AM, William Josefsson > wrote: >> On Thu, Oct 27, 2016 at 5:20 PM, Chris Friesen >> wrote: >>> In your case you probably want to set both computes to have: >>> >>> [libvirt] >>> cpu_mode = custom >>> cpu_model = Haswell >>> >> >> Hi Chris, thanks! Yerps, I finally got it working. However, I set >> cpu_model=kvm64 everywhere and it seems to work. It is listed here: >> https://wiki.openstack.org/wiki/LibvirtXMLCPUModel Hopefully there is >> no performance impact from whatever cpu_model is set to, or would 'kvm64' >> as a model negatively affect my VMs? thx will >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > -- > Mohammed Naser ? vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com

From william.josefson at gmail.com Fri Oct 28 05:09:52 2016 From: william.josefson at gmail.com (William Josefsson) Date: Fri, 28 Oct 2016 13:09:52 +0800 Subject: [Openstack-operators] Live-migration CPU doesn't have compatibility In-Reply-To: References: <5811C6C7.90207@windriver.com> Message-ID: hi, I did 'virsh capabilities' on the Haswell, which turned out to list model: Haswell-noTSX. So I set in nova.conf cpu_model=Haswell-noTSX on both Haswell and Broadwell hosts and it seems to work. I believe this is my smallest common denominator. thx will

On Fri, Oct 28, 2016 at 2:39 AM, Mohammed Naser wrote: > Depending on your workload, it will. If your workloads depend on any custom CPU > extensions, they will miss out on them and performance will be > decreased. My personal suggestion is to read the docs for it and use > the "smallest common denominator" in terms of CPU usage. > > On Thu, Oct 27, 2016 at 11:31 AM, William Josefsson > wrote: >> On Thu, Oct 27, 2016 at 5:20 PM, Chris Friesen >> wrote: >>> In your case you probably want to set both computes to have: >>> >>> [libvirt] >>> cpu_mode = custom >>> cpu_model = Haswell >>> >> >> Hi Chris, thanks! Yerps, I finally got it working. However, I set >> cpu_model=kvm64 everywhere and it seems to work. It is listed here: >> https://wiki.openstack.org/wiki/LibvirtXMLCPUModel Hopefully there is >> no performance impact from whatever cpu_model is set to, or would 'kvm64' >> as a model negatively affect my VMs? thx will >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- > Mohammed Naser ? vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com

From jon at csail.mit.edu Fri Oct 28 09:08:36 2016 From: jon at csail.mit.edu (Jonathan D. Proulx) Date: Fri, 28 Oct 2016 11:08:36 +0200 Subject: [Openstack-operators] Disable console for an instance In-Reply-To: References: <20161027140230.GA19262@csail.mit.edu> Message-ID: <20161028090836.GA25366@csail.mit.edu>

That is an interesting angle. There *should* be a way to limit vnc access to just the owner via RBAC. If you trust everything else to be set up right that's probably sufficient.

Putting on my paranoid security hat, I wouldn't trust that. VNC access at least is completely unsecured at the hypervisor side. Of course we have measures in place to prevent anyone directly accessing that (iptables rules on all hypervisors in my case that get checked every 30min by config management).

Mistakes happen, and if I had hard security needs for a VM I'd want to be sure I had control of that console and not rely on my provider (even if I'm my own provider honestly), so I think there's still value in putting a feature in Nova for this.
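(For reference, the sort of thing our config management enforces on each hypervisor, roughly; this assumes the console proxies live on the controllers and libvirt's VNC servers listen on the usual 5900+ range, and the addresses here are made up for the example:

    # only the console proxy hosts may reach the VNC ports
    iptables -A INPUT -p tcp --dport 5900:5999 -s 10.0.0.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 5900:5999 -s 10.0.0.11 -j ACCEPT
    iptables -A INPUT -p tcp --dport 5900:5999 -j DROP

Belt and braces, not a substitute for fixing it in policy.)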
-Jon

On Thu, Oct 27, 2016 at 10:53:15AM -0400, George Mihaiescu wrote: : You're right, it's probably the following you would want changed: : "compute:get_vnc_console": "", : "compute:get_spice_console": "", : "compute:get_rdp_console": "", : "compute:get_serial_console": "", : "compute:get_mks_console": "", : "compute:get_console_output": "", : I thought the use case is to limit console access to users in a shared : project environment, where you might have multiple users seeing each : other's instances, and you don't want them to try logging in on the console. : You could create a special role that has console access and change the : policy file to reference that role for "compute:get_vnc_console", : for example. : I don't think you can do it on a per-flavor basis. : Cheers, : George : : On Thu, Oct 27, 2016 at 10:24 AM, Blair Bethwaite : <[1]blair.bethwaite at gmail.com> wrote: : : Hi George, : On 27 October 2016 at 16:15, George Mihaiescu : <[2]lmihaiescu at gmail.com> wrote: : > Did you try playing with Nova's policy file and limit the scope : for : > "compute_extension:console_output": "" ? : No, interesting idea though... I suspect it's actually the : get_*_console policies we'd need to tweak, I think console_output : probably refers to the console log? Anyway, not quite sure how we'd : craft policy that would enable us to disable these on a per-instance : basis though - is it possible to reference image metadata in the : context of the policy rule? : -- : Cheers, : ~Blairo : :References : : 1. mailto:blair.bethwaite at gmail.com : 2. mailto:lmihaiescu at gmail.com

From ahmedmostafadev at gmail.com Mon Oct 31 09:34:02 2016 From: ahmedmostafadev at gmail.com (Ahmed Mostafa) Date: Mon, 31 Oct 2016 10:34:02 +0100 Subject: [Openstack-operators] Neutron Nova Notification and Virtual Interface plugging events Message-ID:

Hello all,

I am a bit confused by the following parameters:

notify_nova_on_port_status_changes
notify_nova_on_port_data_changes

The reason I am confused is that I see that if either of these values is set to true (which is the default), neutron will create a nova notifier and send these events to nova.

But I can not understand that part from the nova side, because as I see, when you create a virtual machine you basically call _do_build_and_run_instance, which calls _create_domain_and_network. These two methods create a virtual machine and a port, then attach that port to the virtual machine, but in _create_domain_and_network I see nova waiting for neutron to create the port, and it has a neutron_failed_callback if neutron should fail in creating the port.

Now, if I set these values to false, instance creation will still work, so I do not really understand: are these two values critical in creating a virtual machine? and if not, what exactly do they do?

Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL:

From klindgren at godaddy.com Mon Oct 31 11:17:11 2016 From: klindgren at godaddy.com (Kris G. Lindgren) Date: Mon, 31 Oct 2016 11:17:11 +0000 Subject: [Openstack-operators] Neutron Nova Notification and Virtual Interface plugging events In-Reply-To: References: Message-ID: <069CDE6B-1DCA-423C-9A4C-D2B54283004B@godaddy.com>

The reason behind this was that in the past it took neutron a long time to plug a vif (especially true on large networks with many boot requests happening at the same time)... Iirc, it was updating the dhcp server that was the slowest operation. Before the nova/neutron notifications nova would start a vm and assume that the vif plug operation was instant. This would result in vm's potentially booting up without networking. So now nova will create the vm, pause the vm, wait for neutron to say it's finished plugging the vif, then unpause the vm, ensuring that all vm's will have networking by the time the OS boots. In nova, if you set vif plugging to be fatal and give it a timeout, the vm's will fail to boot if neutron hasn't plugged the vif within the timeout value. Setting vif plugging fatal to false runs with the old behavior in that nova will just boot the vm; I do not remember if it waits for the timeout value and continues, or if it waits at all when fatal is not set.
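The relevant knobs, roughly (a sketch of the usual setup; defaults can differ a bit between releases, and neutron also needs nova credentials configured for the notifications to actually go anywhere):

    # neutron.conf on the neutron server
    [DEFAULT]
    notify_nova_on_port_status_changes = true
    notify_nova_on_port_data_changes = true

    # nova.conf on the computes
    [DEFAULT]
    vif_plugging_is_fatal = true
    vif_plugging_timeout = 300

With is_fatal true, a guest whose vif never gets plugged goes to ERROR instead of booting without networking.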
Sent from my iPad

> On Oct 31, 2016, at 10:36 AM, Ahmed Mostafa wrote: > > Hello all, > > I am a bit confused by the following parameters: > > notify_nova_on_port_status_changes > notify_nova_on_port_data_changes > > The reason I am confused is that I see that if either of these values is set to true (which is the default), neutron will create a nova notifier and send these events to nova. > > But I can not understand that part from the nova side, because as I see, when you create a virtual machine you basically call > > _do_build_and_run_instance, which calls _create_domain_and_network > > These two methods create a virtual machine and a port, then attach that port to the virtual machine > > but in _create_domain_and_network, I see nova waiting for neutron to create the port and it has a neutron_failed_callback, if neutron should fail in creating the port. > > Now, if I set these values to false, instance creation will still work, so I do not really understand, are these two values critical in creating a virtual machine? and if not what exactly do they do? > > Thank you > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From aj at suse.com Mon Oct 31 16:51:44 2016 From: aj at suse.com (Andreas Jaeger) Date: Mon, 31 Oct 2016 17:51:44 +0100 Subject: [Openstack-operators] Question about ancient published OPS and Architecture guides Message-ID: <0e2a88ee-cf0e-f4d6-f61f-355a81ef529c@suse.com>

Operators, a quick question from the docs team:

We currently publish a frozen epub and mobi version of the O'Reilly Operations Guide - in the version from 20th May 2014. This is now quite different from the HTML version.

The same for the Architecture Design Guide. Our epub is frozen and from 30th October 2014.

We plan to add current PDFs for these documents in the Ocata cycle.

Is there any reason these ancient epub/mobi versions should still get published?

Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From omgjlk at us.ibm.com Mon Oct 31 17:10:10 2016 From: omgjlk at us.ibm.com (Jesse Keating) Date: Mon, 31 Oct 2016 17:10:10 +0000 Subject: [Openstack-operators] 2017 Openstack Operators Mid-Cycle Meetups - venue selection etherpads In-Reply-To: <9be7d001-d464-74f7-afdb-38aa277032dc@redhat.com> References: <9be7d001-d464-74f7-afdb-38aa277032dc@redhat.com>, <8b7bf680-9e4a-3bf7-2670-c9db053597cb@openstack.org> Message-ID: An HTML attachment was scrubbed... URL:

From jon at csail.mit.edu Mon Oct 31 17:10:58 2016 From: jon at csail.mit.edu (Jonathan D. Proulx) Date: Mon, 31 Oct 2016 13:10:58 -0400 Subject: [Openstack-operators] Question about ancient published OPS and Architecture guides In-Reply-To: <0e2a88ee-cf0e-f4d6-f61f-355a81ef529c@suse.com> References: <0e2a88ee-cf0e-f4d6-f61f-355a81ef529c@suse.com> Message-ID: <20161031171058.GB19227@csail.mit.edu>

I always use the HTML versions and can't think of a case where I'd want the epub or mobi.

If they are also outdated I definitely think they should be removed just to prevent confusion.

If there's a wider desire for these formats (which I doubt) then they'd need to be published much more frequently. I would be surprised if there were a need for this and just dropping them is likely the best option.

-Jon

On Mon, Oct 31, 2016 at 05:51:44PM +0100, Andreas Jaeger wrote: :Operators, a quick question from the docs team: : :We currently publish a frozen epub and mobi version of the O'Reilly :Operations Guide - in the version from 20th May 2014. This is now quite :different from the HTML version. : :The same for the Architecture Design Guide.
Our epub is frozen and :from 30th October 2014. : :We plan to add current PDFs for these documents in the Ocata cycle. : :Is there any reason these ancient epub/mobi versions should still get :published? : :Andreas :-- : Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi : SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany : GF: Felix Imendörffer, Jane Smithard, Graham Norton, : HRB 21284 (AG Nürnberg) : GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 : : :_______________________________________________ :OpenStack-operators mailing list :OpenStack-operators at lists.openstack.org :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--

From edgar.magana at workday.com Mon Oct 31 17:24:52 2016 From: edgar.magana at workday.com (Edgar Magana) Date: Mon, 31 Oct 2016 17:24:52 +0000 Subject: [Openstack-operators] 2017 Openstack Operators Mid-Cycle Meetups - venue selection etherpads In-Reply-To: <9be7d001-d464-74f7-afdb-38aa277032dc@redhat.com> References: <8b7bf680-9e4a-3bf7-2670-c9db053597cb@openstack.org> <9be7d001-d464-74f7-afdb-38aa277032dc@redhat.com> Message-ID: <8B2C4957-D94A-41A0-A420-326F772FC104@workday.com>

Hello,

The PTG is for developers to work on items needed for the release and to get together in person to discuss any open topics that they could not resolve during the IRC meetings. The Forum, which takes place the same week and in the same venue as the OpenStack Conference (for instance the next one in Boston), will provide the opportunity to give direct feedback to the developers on the latest release and features for each project. More details here: https://www.openstack.org/ptg/

This is why we still need to have the Ops mid-cycle Meet-up. This was discussed and clarified during the Ops session in BCN.

Now, when I mentioned above "PTG is for developers" I do not want to exclude anyone. If you feel like attending the PTG I am pretty sure you will be welcome as long as the space is large enough for the ATC members.

BTW, when we last compared the number of AUCs who are also ATCs, we found that a total of 38% are both.

Thanks,

Edgar

On 10/27/16, 5:31 AM, "Matthias Runge" wrote: On 27/10/16 11:08, Erik McCormick wrote: > The PTG is for devs to get together and get real work done. We would be > a distraction from that goal. They will also be attending the forum > which will run with the summits and will be able to spend more time in > groups with ops for requirements gathering and such. > > -Erik > > > On Oct 27, 2016 11:05 AM, "Jesse Keating" > wrote: > > I may have missed something, but why aren't we meeting at the > Project Technical Gathering, which is at the end of February in Atlanta? From my experience with OpenStack, feedback from operators has been invaluable. You can easily run things in devstack (or all-in-one deployments), but this is completely different from running at scale. Operators do tell you where the pain points are. Having a dedicated gathering without involving actual operators/users is not that useful IMO.
Matthias -- Matthias Runge Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander

_______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From lutz.birkhahn at noris.de Mon Oct 31 18:33:36 2016 From: lutz.birkhahn at noris.de (Lutz Birkhahn) Date: Mon, 31 Oct 2016 18:33:36 +0000 Subject: [Openstack-operators] Question about ancient published OPS and Architecture guides In-Reply-To: <20161031171058.GB19227@csail.mit.edu> References: <0e2a88ee-cf0e-f4d6-f61f-355a81ef529c@suse.com> <20161031171058.GB19227@csail.mit.edu> Message-ID: <10FB6D86-167F-4BDC-9788-C4DA30411465@noris.de>

Hi,

I have already manually created PDF versions of about 8 of the OpenStack Manuals (within about 4-6 hours including setting up the tool chain and locally fixing some bugs), and I am working on getting the rest done (at least those that are in the openstack-manuals.git repository) within the next few weeks, and making them available to the public. I'm currently working on an at least 3-phase approach:

phase 1) get as many of the docs in git (openstack-manuals.git) as possible (mostly manually) converted to PDF and publish a URL where you can download them.

phase 2) set up a local build pipeline in our own OpenStack cloud to regularly convert the latest git versions to PDF e.g. every night, and publish them to the same location, possibly also providing docs for different versions (e.g. mitaka, newton, ocata)

phase 3) work with the docs PTL (Lana) or whoever can help with it to set up the build process on the regular OpenStack / Ubuntu or whatever build environments so that the build process possibly could run on the standard build servers, and no longer on our own machines. Maybe the PDF version will not yet be a gate in the build process, but it should at least be flagged as a warning when there are errors, so the right people can look into it and try to fix it soon, without holding up the rest of the build and release process.

I was about to contact Lana in Barcelona, and we did meet 2 times, but we were both too busy with other meetings so didn't get to talk about this in Barcelona, but I should be able to track her down on IRC or email or some other way soon (hopefully, if schedule permits it ;-) )

I absolutely see a case for PDF files, maybe some time also epub or mobi, and the tool chain already includes Sphinx as far as I know, which already provides the ability to create (La)TeX files which can then easily be typeset into PDF format, probably a few others as well (unfortunately I also didn't have time to track down the Sphinx author, but at least got a lead on that).

HTML is fine for online viewing, but any time you sit in an airplane (e.g.
from or to the Summit) or in a train with bad Internet connectivity, you'd need to download the whole HTML source tree, which is much more of a hassle than if you could just download a PDF or e-book file.

Also even in today's time there are still people who prefer a printed copy rather than some online doc, e.g. for sitting on the couch and having the feeling of real paper in your hand, or for taking it to the beach. I'm thinking about setting up a link somewhere on the docs site where you can order a printed copy (e.g. some books-on-demand provider) where you can at any time order a printed version of the latest doc version. I've even run into a 'collector' type of person in Barcelona who likes to have all the books, but usually doesn't even have time to read them, just the good feeling of having a lot of beautiful or interesting books... Sure, this is not everybody's opinion about book formats, and many just like the HTML version (which will of course stay nevertheless), but if there are only 2 to 5 percent of all OpenStack users who'd like a PDF or printed version, this will still be in the hundreds I'd guess, maybe thousands.

I also urgently request that the existing .Epub and .Mobi versions are kept at least in some 'archives' location, since those are so far the only examples (that I know of) of carefully edited versions of the book, even though they are a bit outdated. Not sure if O'Reilly has some sort of copyright on the looks of the Ops book (we certainly cannot copy the front page with the "crested agouti" animal), but in my opinion it can at least be used as an example of how the future PDF and printed versions of the Ops book might look, also including a Table of Contents, an Index, and a Colophon.

I will certainly keep a copy of these 2 files around, and I strongly suggest keeping a copy in some publicly available location (if need be, I will provide that copy on our servers and make it available to anyone interested in them).

Just my 2 cents, and no, I'm not yet committing to all of this, just my current thoughts (Steve Martinelli, I heard you in the panel: "Do not over commit!")

Cheers,

/lutz

> On 31 Oct 2016, at 18:10, Jonathan D. Proulx wrote: > > > I always use the HTML versions and can't think of a case where I'd > want the epub or mobi. > > If they are also outdated I definitely think they should be removed > just to prevent confusion. > > If there's a wider desire for these formats (which I doubt) then > they'd need to be published much more frequently. I would be > surprised if there were a need for this and just dropping them is > likely the best option. > > -Jon > > On Mon, Oct 31, 2016 at 05:51:44PM +0100, Andreas Jaeger wrote: > :Operators, a quick question from the docs team: > : > :We currently publish a frozen epub and mobi version of the O'Reilly > :Operations Guide - in the version from 20th May 2014. This is now quite > :different from the HTML version. > : > :The same for the Architecture Design Guide. Our epub is frozen and > :from 30th October 2014. > : > :We plan to add current PDFs for these documents in the Ocata cycle. > : > :Is there any reason these ancient epub/mobi versions should still get > :published? > : > :Andreas > :-- > : Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > : SUSE LINUX GmbH, Maxfeldstr.
5, 90409 Nürnberg, Germany > : GF: Felix Imendörffer, Jane Smithard, Graham Norton, > : HRB 21284 (AG Nürnberg) > : GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > : > : > :_______________________________________________ > :OpenStack-operators mailing list > :OpenStack-operators at lists.openstack.org > :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 6404 bytes Desc: not available URL:

From aj at suse.com Mon Oct 31 18:44:47 2016 From: aj at suse.com (Andreas Jaeger) Date: Mon, 31 Oct 2016 19:44:47 +0100 Subject: [Openstack-operators] Question about ancient published OPS and Architecture guides In-Reply-To: <10FB6D86-167F-4BDC-9788-C4DA30411465@noris.de> References: <0e2a88ee-cf0e-f4d6-f61f-355a81ef529c@suse.com> <20161031171058.GB19227@csail.mit.edu> <10FB6D86-167F-4BDC-9788-C4DA30411465@noris.de> Message-ID: <4bc1099e-2382-4fe4-4411-754be52b0f26@suse.com>

On 10/31/2016 07:33 PM, Lutz Birkhahn wrote: > Hi, > > I have already manually created PDF versions of about 8 of the OpenStack Manuals (within about 4-6 hours including setting up the tool chain and locally fixing some bugs), and I am working on getting the rest done (at least those that are in the openstack-manuals.git repository) within the next few weeks, and making them available to the public. I'm currently working on an at least 3-phase approach:

Lutz, see http://specs.openstack.org/openstack/docs-specs/specs/ocata/build-pdf-from-rst-guides.html

Our goal is to publish these PDFs whenever we publish the HTML - so we always have a current version.

> phase 1) get as many of the docs in git (openstack-manuals.git) as possible (mostly manually) converted to PDF and publish a URL where you can download them. > > phase 2) set up a local build pipeline in our own OpenStack cloud to regularly convert the latest git versions to PDF e.g. every night, and publish them to the same location, possibly also providing docs for different versions (e.g. mitaka, newton, ocata) > > phase 3) work with the docs PTL (Lana) or whoever can help with it to set up the build process on the regular OpenStack / Ubuntu or whatever build environments so that the build process possibly could run on the standard build servers, and no longer on our own machines. Maybe the PDF version will not yet be a gate in the build process, but it should at least be flagged as a warning when there are errors, so the right people can look into it and try to fix it soon, without holding up the rest of the build and release process.

See the referenced specs - and help Ian and others please.
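If you want to experiment locally in the meantime, the rough flow the spec describes boils down to Sphinx's latex builder plus a TeX run, something like this (a sketch; it assumes a checkout of openstack-manuals with Sphinx and a LaTeX toolchain installed, and that the guide's RST lives under doc/<guide>/source as most of them do):

    sphinx-build -b latex doc/ops-guide/source build/latex
    make -C build/latex

The spec is about wiring exactly that into the normal publish jobs.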
> I was about to contact Lana in Barcelona, and we did meet 2 times, but we were both too busy with other meetings so didn't get to talk about this in Barcelona, but I should be able to track her down on IRC or email or some other way soon (hopefully, if schedule permits it ;-) ) > > I absolutely see a case for PDF files, maybe some time also epub or mobi, and the tool chain already includes Sphinx as far as I know, which already provides the ability to create (La)TeX files which can then easily be typeset into PDF format, probably a few others as well (unfortunately I also didn't have time to track down the Sphinx author, but at least got a lead on that). > > HTML is fine for online viewing, but any time you sit in an airplane (e.g. from or to the Summit) or in a train with bad Internet connectivity, you'd need to download the whole HTML source tree, which is much more of a hassle than if you could just download a PDF or e-book file. > > Also even in today's time there are still people who prefer a printed copy rather than some online doc, e.g. for sitting on the couch and having the feeling of real paper in your hand, or for taking it to the beach. I'm thinking about setting up a link somewhere on the docs site where you can order a printed copy (e.g. some books-on-demand provider) where you can at any time order a printed version of the latest doc version. I've even run into a 'collector' type of person in Barcelona who likes to have all the books, but usually doesn't even have time to read them, just the good feeling of having a lot of beautiful or interesting books... Sure, this is not everybody's opinion about book formats, and many just like the HTML version (which will of course stay nevertheless), but if there are only 2 to 5 percent of all OpenStack users who'd like a PDF or printed version, this will still be in the hundreds I'd guess, maybe thousands

> I also urgently request that the existing .Epub and .Mobi versions are kept at least in some 'archives' location, since those are so far the only examples (that I know of) of carefully edited versions of the book, even though they are a bit outdated. Not sure if O'Reilly has some sort of copyright on the looks of the Ops book (we certainly cannot copy the front page with the "crested agouti" animal), but in my opinion it can at least be used as an example of how the future PDF and printed versions of the Ops book might look, also including a Table of Contents, an Index, and a Colophon.

Why would a 2-year-old epub or mobi be beneficial for you - even in an archives location?

Andreas

> I will certainly keep a copy of these 2 files around, and I strongly suggest keeping a copy in some publicly available location (if need be, I will provide that copy on our servers and make it available to anyone interested in them). > > Just my 2 cents, and no, I'm not yet committing to all of this, just my current thoughts (Steve Martinelli, I heard you in the panel: "Do not over commit!") > > Cheers, > > /lutz > > > >> On 31 Oct 2016, at 18:10, Jonathan D. Proulx wrote: >> >> >> I always use the HTML versions and can't think of a case where I'd >> want the epub or mobi. >> >> If they are also outdated I definitely think they should be removed >> just to prevent confusion. >> >> If there's a wider desire for these formats (which I doubt) then >> they'd need to be published much more frequently. I would be >> surprised if there were a need for this and just dropping them is >> likely the best option.
>> >> -Jon >> >> On Mon, Oct 31, 2016 at 05:51:44PM +0100, Andreas Jaeger wrote: >> :Operators, a quick question from the docs team: >> : >> :We currently publish a frozen epub and mobi version of the O'Reilly >> :Operations Guide - in the version from 20th May 2014. This is now quite >> :different from the HTML version. >> : >> :The same for the Architecture Design Guide. Our epub is frozen and >> :from 30th October 2014. >> : >> :We plan to add current PDFs for these documents in the Ocata cycle. >> : >> :Is there any reason these ancient epub/mobi versions should still get >> :published? >> : >> :Andreas >> :-- >> : Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi >> : SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany >> : GF: Felix Imendörffer, Jane Smithard, Graham Norton, >> : HRB 21284 (AG Nürnberg) >> : GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 >> : >> : >> :_______________________________________________ >> :OpenStack-operators mailing list >> :OpenStack-operators at lists.openstack.org >> :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> -- >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >

-- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From matt at nycresistor.com Mon Oct 31 18:48:40 2016 From: matt at nycresistor.com (Silence Dogood) Date: Mon, 31 Oct 2016 14:48:40 -0400 Subject: [Openstack-operators] Question about ancient published OPS and Architecture guides In-Reply-To: <4bc1099e-2382-4fe4-4411-754be52b0f26@suse.com> References: <0e2a88ee-cf0e-f4d6-f61f-355a81ef529c@suse.com> <20161031171058.GB19227@csail.mit.edu> <10FB6D86-167F-4BDC-9788-C4DA30411465@noris.de> <4bc1099e-2382-4fe4-4411-754be52b0f26@suse.com> Message-ID:

you know how many folks are STILL running havana openstack?

On Mon, Oct 31, 2016 at 2:44 PM, Andreas Jaeger wrote: > On 10/31/2016 07:33 PM, Lutz Birkhahn wrote: > > Hi, > > > > I have already manually created PDF versions of about 8 of the OpenStack > Manuals (within about 4-6 hours including setting up the tool chain and > locally fixing some bugs), and I am working on getting the rest done (at least > those that are in the openstack-manuals.git repository) within the next few > weeks, and making them available to the public. I'm currently working on an > at least 3-phase approach: > > Lutz, see > http://specs.openstack.org/openstack/docs-specs/specs/ocata/build-pdf-from-rst-guides.html > > Our goal is to publish these PDFs whenever we publish the HTML - so we > always have a current version. > > > phase 1) get as many of the docs in git (openstack-manuals.git) as > possible (mostly manually) converted to PDF and publish a URL where you > can download them. > > > > phase 2) set up a local build pipeline in our own OpenStack cloud to > regularly convert the latest git versions to PDF e.g. every night, and > publish them to the same location, possibly also providing docs for > different versions (e.g.
mitaka, newton, ocata) > > > > phase 3) work with the docs PTL (Lana) or whoever can help with it to > set up the build process on the regular OpenStack / Ubuntu or whatever > build environments so that the build process possibly could run on the > standard build servers, and no longer on our own machines. Maybe the PDF > version will not yet be a gate in the build process, but it should at least > be flagged as a warning when there are errors, so the right people can look > into it and try to fix it soon, without holding up the rest of the build > and release process. > > See the referenced specs - and help Ian and others please. > > > I was about to contact Lana in Barcelona, and we did meet 2 times, but > we were both too busy with other meetings so didn't get to talk about this > in Barcelona, but I should be able to track her down on IRC or email or > some other way soon (hopefully, if schedule permits it ;-) ) > > > > I absolutely see a case for PDF files, maybe some time also epub or > mobi, and the tool chain already includes Sphinx as far as I know, which > already provides the ability to create (La)TeX files which can then easily > be typeset into PDF format, probably a few others as well (unfortunately I > also didn't have time to track down the Sphinx author, but at least got a > lead on that). > > > > HTML is fine for online viewing, but any time you sit in an airplane > (e.g. from or to the Summit) or in a train with bad Internet connectivity, > you'd need to download the whole HTML source tree, which is much more of a > hassle than if you could just download a PDF or e-book file. > > > > Also even in today's time there are still people who prefer a printed > copy rather than some online doc, e.g. for sitting on the couch and having > the feeling of real paper in your hand, or for taking it to the beach. I'm > thinking about setting up a link somewhere on the docs site where you can > order a printed copy (e.g. some books-on-demand provider) where you can at > any time order a printed version of the latest doc version. I've even run > into a 'collector' type of person in Barcelona who likes to have all the > books, but usually doesn't even have time to read them, just the good > feeling of having a lot of beautiful or interesting books... Sure, this is > not everybody's opinion about book formats, and many just like the HTML > version (which will of course stay nevertheless), but if there are only 2 > to 5 percent of all OpenStack users who'd like a PDF or printed version, > this will still be in the hundreds I'd guess, maybe thousands > > > > I also urgently request that the existing .Epub and .Mobi versions are > kept at least in some 'archives' location, since those are so far the only > examples (that I know of) of carefully edited versions of the book, even > though they are a bit outdated. Not sure if O'Reilly has some sort of > copyright on the looks of the Ops book (we certainly cannot copy the front > page with the "crested agouti" animal), but in my opinion it can at least > be used as an example of how the future PDF and printed versions of the Ops > book might look, also including a Table of Contents, an Index, and a > Colophon. > > > Why would a 2-year-old epub or mobi be beneficial for you - even in an > archives location?
> > Andreas > > > I will certainly keep a copy of these 2 files around, and I strongly > suggest keeping a copy in some publicly available location (if need be, I > will provide that copy on our servers and make it available to anyone > interested in them). > > > > Just my 2 cents, and no, I'm not yet committing to all of this, just my > current thoughts (Steve Martinelli, I heard you in the panel: "Do not over > commit!") > > > > Cheers, > > > > /lutz > > > > > > > >> On 31 Oct 2016, at 18:10, Jonathan D. Proulx wrote: > >> > >> > >> I always use the HTML versions and can't think of a case where I'd > >> want the epub or mobi. > >> > >> If they are also outdated I definitely think they should be removed > >> just to prevent confusion. > >> > >> If there's a wider desire for these formats (which I doubt) then > >> they'd need to be published much more frequently. I would be > >> surprised if there were a need for this and just dropping them is > >> likely the best option. > >> > >> -Jon > >> > >> On Mon, Oct 31, 2016 at 05:51:44PM +0100, Andreas Jaeger wrote: > >> :Operators, a quick question from the docs team: > >> : > >> :We currently publish a frozen epub and mobi version of the O'Reilly > >> :Operations Guide - in the version from 20th May 2014. This is now quite > >> :different from the HTML version. > >> : > >> :The same for the Architecture Design Guide. Our epub is frozen and > >> :from 30th October 2014. > >> : > >> :We plan to add current PDFs for these documents in the Ocata cycle. > >> : > >> :Is there any reason these ancient epub/mobi versions should still get > >> :published? > >> : > >> :Andreas > >> :-- > >> : Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > >> : SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany > >> : GF: Felix Imendörffer, Jane Smithard, Graham Norton, > >> : HRB 21284 (AG Nürnberg) > >> : GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > >> : > >> : > >> :_______________________________________________ > >> :OpenStack-operators mailing list > >> :OpenStack-operators at lists.openstack.org > >> :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > >> -- > >> > >> _______________________________________________ > >> OpenStack-operators mailing list > >> OpenStack-operators at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From chris.friesen at windriver.com Mon Oct 31 19:21:12 2016 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 31 Oct 2016 13:21:12 -0600 Subject: [Openstack-operators] Live-migration CPU doesn't have compatibility In-Reply-To: References: <5811C6C7.90207@windriver.com> Message-ID: <581799A8.7070002@windriver.com>

On 10/27/2016 11:09 PM, William Josefsson wrote: > hi, I did 'virsh capabilities' on the Haswell, which turned out to > list model: Haswell-noTSX.
So I set in nova.conf > cpu_model=Haswell-noTSX on both Haswell and Broadwell hosts and it > seems to work. I believe this is my smallest common denominator. Almost certainly, yes. Chris