From fawaz.moh.ibraheem at gmail.com Thu Feb 1 04:48:45 2018 From: fawaz.moh.ibraheem at gmail.com (Fawaz Mohammed) Date: Thu, 1 Feb 2018 08:48:45 +0400 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: http://www.jimmdenton.com/networking-cisco-asr-install/ can beet your requirements. I have another plugin in production from different vendor for the same purpose. and it works perfect. Regarding your question about the license, usually, there is no license for such plugins. I've no production experience with DVR, and I don't recommend it in medium to large environment. On Thu, Feb 1, 2018 at 2:58 AM, Satish Patel wrote: > What about this? http://www.jimmdenton.com/networking-cisco-asr-part-two/ > > ML2 does use ASR too, > > Just curious what people mostly use in production? are they use DVR or > some kind of hardware for L3? > > > On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed > wrote: > > Hi Santish, > > > > In my knowlege, Cisco has ml2 driver for Nexus only. > > > > So, if you have requirements for dynamic L3 provisioning / configuration, > > it's better to go with SDN solution. > > > > On Jan 31, 2018 11:39 PM, "Satish Patel" wrote: > >> > >> So no one using ASR 1001 for Openstack? > >> > >> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel > >> wrote: > >> > Folks, > >> > > >> > We are planning to deploy production style private cloud and gathering > >> > information about what we should use and why and i came across with > >> > couple of document related network node criticality and performance > >> > issue and many folks suggesting following > >> > > >> > 1. DVR (it seem complicated after reading, also need lots of public > IP) > >> > 2. Use ASR1k centralized router to use for L3 function (any idea what > >> > model should be good? or do we need any licensing to integrate with > >> > openstack?) > >> > > >> > Would like to get some input from folks who already using openstack in > >> > production and would like to know what kind of deployment they pick > >> > for network/neutron performance? > >> > >> _______________________________________________ > >> Mailing list: > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >> Post to : openstack at lists.openstack.org > >> Unsubscribe : > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fawaz.moh.ibraheem at gmail.com Thu Feb 1 04:50:29 2018 From: fawaz.moh.ibraheem at gmail.com (Fawaz Mohammed) Date: Thu, 1 Feb 2018 08:50:29 +0400 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: *can meet your requirements On Thu, Feb 1, 2018 at 8:48 AM, Fawaz Mohammed wrote: > http://www.jimmdenton.com/networking-cisco-asr-install/ can beet your > requirements. I have another plugin in production from different vendor for > the same purpose. and it works perfect. > > Regarding your question about the license, usually, there is no license > for such plugins. > > I've no production experience with DVR, and I don't recommend it in medium > to large environment. > > On Thu, Feb 1, 2018 at 2:58 AM, Satish Patel wrote: > >> What about this? http://www.jimmdenton.com/networking-cisco-asr-part-two/ >> >> ML2 does use ASR too, >> >> Just curious what people mostly use in production? are they use DVR or >> some kind of hardware for L3? 
>> >> >> On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed >> wrote: >> > Hi Santish, >> > >> > In my knowlege, Cisco has ml2 driver for Nexus only. >> > >> > So, if you have requirements for dynamic L3 provisioning / >> configuration, >> > it's better to go with SDN solution. >> > >> > On Jan 31, 2018 11:39 PM, "Satish Patel" wrote: >> >> >> >> So no one using ASR 1001 for Openstack? >> >> >> >> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel >> >> wrote: >> >> > Folks, >> >> > >> >> > We are planning to deploy production style private cloud and >> gathering >> >> > information about what we should use and why and i came across with >> >> > couple of document related network node criticality and performance >> >> > issue and many folks suggesting following >> >> > >> >> > 1. DVR (it seem complicated after reading, also need lots of public >> IP) >> >> > 2. Use ASR1k centralized router to use for L3 function (any idea >> what >> >> > model should be good? or do we need any licensing to integrate with >> >> > openstack?) >> >> > >> >> > Would like to get some input from folks who already using openstack >> in >> >> > production and would like to know what kind of deployment they pick >> >> > for network/neutron performance? >> >> >> >> _______________________________________________ >> >> Mailing list: >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> Post to : openstack at lists.openstack.org >> >> Unsubscribe : >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Remo at Italy1.com Thu Feb 1 07:16:44 2018 From: Remo at Italy1.com (Remo Mattei) Date: Thu, 1 Feb 2018 08:16:44 +0100 Subject: [Openstack] tripleO Error No valid host was found In-Reply-To: <20180131234803.GD23143@thor.bakeyournoodle.com> References: <20180131234803.GD23143@thor.bakeyournoodle.com> Message-ID: <1F20886F-70CE-4F60-BE1B-9BF0EE57A06B@Italy1.com> What are you running? Pike? Ocata? Pike option have changed so the flavor could be an issue now. You should look into that and see. Inviato da iPhone > Il giorno 01 feb 2018, alle ore 00:48, Tony Breeds ha scritto: > >> On Wed, Jan 31, 2018 at 06:10:11PM -0500, Satish Patel wrote: >> I am playing with tripleO and getting following error when deploying >> overcloud, I doing all this on VMware Workstation with fake_pxe >> driver, I did enable drive in ironic too. >> >> What could be wrong here? > > There's lots that could be wrong sadly. Testign under VMware is minimal > to none. There are some good tips at: > https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting.html > Specifically: > https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html#no-valid-host-found-error > > Yours Tony. 
> _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From brune.lars at gmail.com Thu Feb 1 07:40:26 2018 From: brune.lars at gmail.com (Lars Brune) Date: Thu, 1 Feb 2018 08:40:26 +0100 Subject: [Openstack] tripleO Error No valid host was found In-Reply-To: <1F20886F-70CE-4F60-BE1B-9BF0EE57A06B@Italy1.com> References: <20180131234803.GD23143@thor.bakeyournoodle.com> <1F20886F-70CE-4F60-BE1B-9BF0EE57A06B@Italy1.com> Message-ID: <35E1C10A-6220-4689-8A95-D44EA437ADDE@gmail.com> Hi Satish, could you provide some more information about the hosts and flavors? I’ve made the experience that it sometimes helps to create the flavors with a bit of a margin to the „real“ hardware to avoid nova complaining about this. > Am 01.02.2018 um 08:16 schrieb Remo Mattei : > > What are you running? Pike? Ocata? > > Pike option have changed so the flavor could be an issue now. You should look into that and see. > > Inviato da iPhone > >> Il giorno 01 feb 2018, alle ore 00:48, Tony Breeds ha scritto: >> >>> On Wed, Jan 31, 2018 at 06:10:11PM -0500, Satish Patel wrote: >>> I am playing with tripleO and getting following error when deploying >>> overcloud, I doing all this on VMware Workstation with fake_pxe >>> driver, I did enable drive in ironic too. >>> >>> What could be wrong here? >> >> There's lots that could be wrong sadly. Testign under VMware is minimal >> to none. There are some good tips at: >> https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting.html >> Specifically: >> https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html#no-valid-host-found-error >> >> Yours Tony. >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From doka.ua at gmx.com Thu Feb 1 11:30:18 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Thu, 1 Feb 2018 13:30:18 +0200 Subject: [Openstack] OpenVSwitch inside Instance no ARP passthrough In-Reply-To: <9e663e326f138cf141d11964764388f1@projects.dfki.uni-kl.de> References: <19e2c014fb8d332bdb3518befce68a37@projects.dfki.uni-kl.de> <11ea9728-9446-2d8c-db3f-f5712e891af4@gmx.com> <9e663e326f138cf141d11964764388f1@projects.dfki.uni-kl.de> Message-ID: <7da0834a-12f7-ce79-db48-87c4058040cd@gmx.com> Dear Mathias, if I correctly understand your configuration, you're using bridges inside VM and it configuration looks a bit strange: 1) you use two different bridges (OVSbr1/192.168.120.x and OVSbr2/192.168.110.x) and there is no patch between them so they're separate 2) while ARP requests for address in OVSbr1 arrives from OVSbr2: > 18:50:58.080478 ARP, Request who-has *192.168.120.10* tell 192.168.120.6, length 28 > > but on the OVS bridge nothing arrives ... > > listening on *OVSbr2*, link-type EN10MB (Ethernet), capture size > 262144 bytes while these bridges are separate, ARP requests and answers will not be passed between them. 
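
If you really want traffic to flow between them inside the VM, you can
connect the two bridges with a patch pair - roughly like this (just a
sketch: OVSbr1/OVSbr2 are the bridge names from your output, the patch
port names are arbitrary):

ovs-vsctl add-port OVSbr1 patch-br1-br2 -- set interface patch-br1-br2 type=patch options:peer=patch-br2-br1
ovs-vsctl add-port OVSbr2 patch-br2-br1 -- set interface patch-br2-br1 type=patch options:peer=patch-br1-br2

After that, "ovs-vsctl show" inside the VM should list the two patch ports
as peers of each other.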
Regarding your devstack configuration - unfortunately, I don't have experience with devstack, so don't know, where it stores configs. In Openstack, ml2_conf.ini points to openvswitch in ml2's mechanism_drivers parameter, in my case it looks as the following: [ml2] mechanism_drivers = l2population,openvswitch and rest of openvswitch config described in /etc/neutron/plugins/ml2/openvswitch_agent.ini Second - I see an ambiguity in your br-tun configuration, where patch_int is the same as patch-int without corresponding remote peer config, probably you should check this issue. And third is - note that Mitaka is quite old release and probably you can give a chance for the latest release of devstack? :-) On 1/31/18 10:49 PM, Mathias Strufe (DFKI) wrote: > Dear Volodymyr, all, > > thanks for your fast answer ... > but I'm still facing the same problem, still can't ping the instance > with configured and up OVS bridge ... may because I'm quite new to > OpenStack and OpenVswitch and didn't see the problem ;) > > My setup is devstack Mitaka in single machine config ... first of all > I didn't find there the openvswitch_agent.ini anymore, I remember in > previous version it was in the neutron/plugin folder ... > Is this config now done in the ml2 config file in the [OVS] section???? > > > I'm really wondering ... > so I can ping between the 2 instances without any problem. But as soon > I bring up the OVS bridge inside the vm the ARP requests only visible > at the ens interface but not reaching the OVSbr ... > > please find attached two files which may help for troubleshooting. One > are some network information from inside the Instance that runs the > OVS and one ovs-vsctl info of the OpenStack Host. > > If you need more info/logs please let me know! Thanks for your help! > > BR Mathias. > > > On 2018-01-27 22:44, Volodymyr Litovka wrote: >> Hi Mathias, >> >>  whether you have all corresponding bridges and patches between them >> as described in openvswitch_agent.ini using >> >>  integration_bridge >>  tunnel_bridge >>  int_peer_patch_port >>  tun_peer_patch_port >>  bridge_mappings >> >>  parameters? And make sure, that service "neutron-ovs-cleanup" is in >> use during system boot. You can check these bridges and patches using >> "ovs-vsctl show" command. >> >> On 1/27/18 9:00 PM, Mathias Strufe (DFKI) wrote: >> >>> Dear all, >>> >>> I'm quite new to openstack and like to install openVSwtich inside >>> one Instance of our Mitika openstack Lab Enviornment ... >>> But it seems that ARP packets got lost between the network >>> interface of the instance and the OVS bridge ... >>> >>> With tcpdump on the interface I see the APR packets ... >>> >>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>> decode >>> listening on ens6, link-type EN10MB (Ethernet), capture size 262144 >>> bytes >>> 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>> 18:50:58.125009 ARP, Request who-has 192.168.120.1 tell >>> 192.168.120.6, length 28 >>> 18:50:59.077315 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>> 18:50:59.121369 ARP, Request who-has 192.168.120.1 tell >>> 192.168.120.6, length 28 >>> 18:51:00.077327 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>> 18:51:00.121343 ARP, Request who-has 192.168.120.1 tell >>> 192.168.120.6, length 28 >>> >>> but on the OVS bridge nothing arrives ... 
>>> >>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>> decode >>> listening on OVSbr2, link-type EN10MB (Ethernet), capture size >>> 262144 bytes >>> >>> I disabled port_security and removed the security group but nothing >>> changed >>> >>> >> +-----------------------+---------------------------------------------------------------------------------------+ >> >>> >>> | Field | Value >>> | >>> >> +-----------------------+---------------------------------------------------------------------------------------+ >> >>> >>> | admin_state_up | True >>> | >>> | allowed_address_pairs | >>> | >>> | binding:host_id | node11 >>> | >>> | binding:profile | {} >>> | >>> | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": >>> true} | >>> | binding:vif_type | ovs >>> | >>> | binding:vnic_type | normal >>> | >>> | created_at | 2018-01-27T16:45:48Z >>> | >>> | description | >>> | >>> | device_id | 74916967-984c-4617-ae33-b847de73de13 >>> | >>> | device_owner | compute:nova >>> | >>> | extra_dhcp_opts | >>> | >>> | fixed_ips | {"subnet_id": >>> "525db7ff-2bf2-4c64-b41e-1e41570ec358", "ip_address": >>> "192.168.120.10"} | >>> | id | 74b754d6-0000-4c2e-bfd1-87f640154ac9 >>> | >>> | mac_address | fa:16:3e:af:90:0c >>> | >>> | name | >>> | >>> | network_id | 917254cb-9721-4207-99c5-8ead9f95d186 >>> | >>> | port_security_enabled | False >>> | >>> | project_id | c48457e73b664147a3d2d36d75dcd155 >>> | >>> | revision_number | 27 >>> | >>> | security_groups | >>> | >>> | status | ACTIVE >>> | >>> | tenant_id | c48457e73b664147a3d2d36d75dcd155 >>> | >>> | updated_at | 2018-01-27T18:54:24Z >>> | >>> >> +-----------------------+---------------------------------------------------------------------------------------+ >> >>> >>> >>> maybe the port_filter causes still the problem? But how to disable >>> it? >>> >>> Any other idea? >>> >>> Thanks and BR Mathias. >>> >>> _______________________________________________ >>> Mailing list: >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >> >> -- >> Volodymyr Litovka >>  "Vision without Execution is Hallucination." -- Thomas Edison >> >> >> Links: >> ------ >> [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Feb 1 13:04:00 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 1 Feb 2018 08:04:00 -0500 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: Hi Fawaz, Great so you think I'm doing right thing to use ASR1k for my L3 function? I'm using DVR on my lab with 3 node cluster and it's kinda complicated to troubleshoot sometime. We are planning to build 20 node cluster for start and started playing with tripleo to deploy and having lots of issue with tripleo too :( I was also thinking to use mirantis but they stopped development. Based on your experience what most of folks use to deploy middle size cloud? What about you? Sent from my iPhone > On Jan 31, 2018, at 11:48 PM, Fawaz Mohammed wrote: > > http://www.jimmdenton.com/networking-cisco-asr-install/ can beet your requirements. I have another plugin in production from different vendor for the same purpose. and it works perfect. 
> > Regarding your question about the license, usually, there is no license for such plugins. > > I've no production experience with DVR, and I don't recommend it in medium to large environment. > >> On Thu, Feb 1, 2018 at 2:58 AM, Satish Patel wrote: >> What about this? http://www.jimmdenton.com/networking-cisco-asr-part-two/ >> >> ML2 does use ASR too, >> >> Just curious what people mostly use in production? are they use DVR or >> some kind of hardware for L3? >> >> >> On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed >> wrote: >> > Hi Santish, >> > >> > In my knowlege, Cisco has ml2 driver for Nexus only. >> > >> > So, if you have requirements for dynamic L3 provisioning / configuration, >> > it's better to go with SDN solution. >> > >> > On Jan 31, 2018 11:39 PM, "Satish Patel" wrote: >> >> >> >> So no one using ASR 1001 for Openstack? >> >> >> >> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel >> >> wrote: >> >> > Folks, >> >> > >> >> > We are planning to deploy production style private cloud and gathering >> >> > information about what we should use and why and i came across with >> >> > couple of document related network node criticality and performance >> >> > issue and many folks suggesting following >> >> > >> >> > 1. DVR (it seem complicated after reading, also need lots of public IP) >> >> > 2. Use ASR1k centralized router to use for L3 function (any idea what >> >> > model should be good? or do we need any licensing to integrate with >> >> > openstack?) >> >> > >> >> > Would like to get some input from folks who already using openstack in >> >> > production and would like to know what kind of deployment they pick >> >> > for network/neutron performance? >> >> >> >> _______________________________________________ >> >> Mailing list: >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> Post to : openstack at lists.openstack.org >> >> Unsubscribe : >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdiaz at whitestack.com Thu Feb 1 13:28:15 2018 From: bdiaz at whitestack.com (Benjamin Diaz) Date: Thu, 1 Feb 2018 10:28:15 -0300 Subject: [Openstack] OpenVSwitch inside Instance no ARP passthrough In-Reply-To: <7da0834a-12f7-ce79-db48-87c4058040cd@gmx.com> References: <19e2c014fb8d332bdb3518befce68a37@projects.dfki.uni-kl.de> <11ea9728-9446-2d8c-db3f-f5712e891af4@gmx.com> <9e663e326f138cf141d11964764388f1@projects.dfki.uni-kl.de> <7da0834a-12f7-ce79-db48-87c4058040cd@gmx.com> Message-ID: Dear Mathias, Could you attach a diagram of your network configuration and of what you are trying to achieve? Are you trying to install OVS inside a VM? If so, why? Greetings, Benjamin On Thu, Feb 1, 2018 at 8:30 AM, Volodymyr Litovka wrote: > Dear Mathias, > > if I correctly understand your configuration, you're using bridges inside > VM and it configuration looks a bit strange: > > 1) you use two different bridges (OVSbr1/192.168.120.x and > OVSbr2/192.168.110.x) and there is no patch between them so they're separate > 2) while ARP requests for address in OVSbr1 arrives from OVSbr2: > > > 18:50:58.080478 ARP, Request who-has *192.168.120.10* tell > 192.168.120.6, length 28 > > > > but on the OVS bridge nothing arrives ... > > > > listening on *OVSbr2*, link-type EN10MB (Ethernet), capture size > > 262144 bytes > > while these bridges are separate, ARP requests and answers will not be > passed between them. 
> > Regarding your devstack configuration - unfortunately, I don't have > experience with devstack, so don't know, where it stores configs. In > Openstack, ml2_conf.ini points to openvswitch in ml2's mechanism_drivers > parameter, in my case it looks as the following: > > [ml2] > mechanism_drivers = l2population,openvswitch > > and rest of openvswitch config described in /etc/neutron/plugins/ml2/ > openvswitch_agent.ini > > Second - I see an ambiguity in your br-tun configuration, where patch_int > is the same as patch-int without corresponding remote peer config, probably > you should check this issue. > > And third is - note that Mitaka is quite old release and probably you can > give a chance for the latest release of devstack? :-) > > > On 1/31/18 10:49 PM, Mathias Strufe (DFKI) wrote: > > Dear Volodymyr, all, > > thanks for your fast answer ... > but I'm still facing the same problem, still can't ping the instance with > configured and up OVS bridge ... may because I'm quite new to OpenStack and > OpenVswitch and didn't see the problem ;) > > My setup is devstack Mitaka in single machine config ... first of all I > didn't find there the openvswitch_agent.ini anymore, I remember in previous > version it was in the neutron/plugin folder ... > Is this config now done in the ml2 config file in the [OVS] section???? > > > I'm really wondering ... > so I can ping between the 2 instances without any problem. But as soon I > bring up the OVS bridge inside the vm the ARP requests only visible at the > ens interface but not reaching the OVSbr ... > > please find attached two files which may help for troubleshooting. One are > some network information from inside the Instance that runs the OVS and one > ovs-vsctl info of the OpenStack Host. > > If you need more info/logs please let me know! Thanks for your help! > > BR Mathias. > > > On 2018-01-27 22:44, Volodymyr Litovka wrote: > > Hi Mathias, > > whether you have all corresponding bridges and patches between them > as described in openvswitch_agent.ini using > > integration_bridge > tunnel_bridge > int_peer_patch_port > tun_peer_patch_port > bridge_mappings > > parameters? And make sure, that service "neutron-ovs-cleanup" is in > use during system boot. You can check these bridges and patches using > "ovs-vsctl show" command. > > On 1/27/18 9:00 PM, Mathias Strufe (DFKI) wrote: > > Dear all, > > I'm quite new to openstack and like to install openVSwtich inside > one Instance of our Mitika openstack Lab Enviornment ... > But it seems that ARP packets got lost between the network > interface of the instance and the OVS bridge ... > > With tcpdump on the interface I see the APR packets ... > > tcpdump: verbose output suppressed, use -v or -vv for full protocol > decode > listening on ens6, link-type EN10MB (Ethernet), capture size 262144 > bytes > 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell > 192.168.120.6, length 28 > 18:50:58.125009 ARP, Request who-has 192.168.120.1 tell > 192.168.120.6, length 28 > 18:50:59.077315 ARP, Request who-has 192.168.120.10 tell > 192.168.120.6, length 28 > 18:50:59.121369 ARP, Request who-has 192.168.120.1 tell > 192.168.120.6, length 28 > 18:51:00.077327 ARP, Request who-has 192.168.120.10 tell > 192.168.120.6, length 28 > 18:51:00.121343 ARP, Request who-has 192.168.120.1 tell > 192.168.120.6, length 28 > > but on the OVS bridge nothing arrives ... 
> > tcpdump: verbose output suppressed, use -v or -vv for full protocol > decode > listening on OVSbr2, link-type EN10MB (Ethernet), capture size > 262144 bytes > > I disabled port_security and removed the security group but nothing > changed > > > +-----------------------+----------------------------------- > ----------------------------------------------------+ > > > | Field | Value > | > > +-----------------------+----------------------------------- > ----------------------------------------------------+ > > > | admin_state_up | True > | > | allowed_address_pairs | > | > | binding:host_id | node11 > | > | binding:profile | {} > | > | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": > true} | > | binding:vif_type | ovs > | > | binding:vnic_type | normal > | > | created_at | 2018-01-27T16:45:48Z > | > | description | > | > | device_id | 74916967-984c-4617-ae33-b847de73de13 > | > | device_owner | compute:nova > | > | extra_dhcp_opts | > | > | fixed_ips | {"subnet_id": > "525db7ff-2bf2-4c64-b41e-1e41570ec358", "ip_address": > "192.168.120.10"} | > | id | 74b754d6-0000-4c2e-bfd1-87f640154ac9 > | > | mac_address | fa:16:3e:af:90:0c > | > | name | > | > | network_id | 917254cb-9721-4207-99c5-8ead9f95d186 > | > | port_security_enabled | False > | > | project_id | c48457e73b664147a3d2d36d75dcd155 > | > | revision_number | 27 > | > | security_groups | > | > | status | ACTIVE > | > | tenant_id | c48457e73b664147a3d2d36d75dcd155 > | > | updated_at | 2018-01-27T18:54:24Z > | > > +-----------------------+----------------------------------- > ----------------------------------------------------+ > > > > maybe the port_filter causes still the problem? But how to disable > it? > > Any other idea? > > Thanks and BR Mathias. > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] > > > -- > Volodymyr Litovka > "Vision without Execution is Hallucination." -- Thomas Edison > > > Links: > ------ > [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > -- > Volodymyr Litovka > "Vision without Execution is Hallucination." -- Thomas Edison > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > -- *Benjamín Díaz* Cloud Computing Engineer bdiaz at whitestack.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mathias.strufe at dfki.de Thu Feb 1 14:11:52 2018 From: mathias.strufe at dfki.de (Mathias Strufe (DFKI)) Date: Thu, 01 Feb 2018 15:11:52 +0100 Subject: [Openstack] OpenVSwitch inside Instance no ARP passthrough In-Reply-To: References: <19e2c014fb8d332bdb3518befce68a37@projects.dfki.uni-kl.de> <11ea9728-9446-2d8c-db3f-f5712e891af4@gmx.com> <9e663e326f138cf141d11964764388f1@projects.dfki.uni-kl.de> <7da0834a-12f7-ce79-db48-87c4058040cd@gmx.com> Message-ID: <1b307fff14ee05267a5dad10216c3d04@projects.dfki.uni-kl.de> Dear Benjamin, Volodymyr, good question ;) ... I like to experiment with some kind of "Firewall NFV" ... but in the first step, I want to build a Router VM between two networks (and later extend it with some flow rules) ... 
OpenStack, in my case, is more a foundation to build a "test environment" for my "own" application ... please find attached a quick sketch of the current network ... I did this already before with iptables inside the middle instance ... worked quite well ... but know I like to achieve the same with OVS ... I didn't expect that it is so much more difficult ;) ... I'm currently checking Volodymyrs answer ... I think first point is now solved ... I "patched" now OVSbr1 and OVSbr2 inside the VM together (see OVpatch file)... but I think this is important later when I really like to ping from VM1 to VM2 ... but in the moment I only ping from VM1 to the TestNFV ... but the arp requests only reaches ens4 but not OVSbr1 (according to tcpdump)... May it have to do with port security and the (for OpenStack) unknown MAC address of the OVS bridge? Thanks so far ... Mathias. On 2018-02-01 14:28, Benjamin Diaz wrote: > Dear Mathias, > > Could you attach a diagram of your network configuration and of what > you are trying to achieve? > Are you trying to install OVS inside a VM? If so, why? > > Greetings, > Benjamin > > On Thu, Feb 1, 2018 at 8:30 AM, Volodymyr Litovka > wrote: > >> Dear Mathias, >> >> if I correctly understand your configuration, you're using bridges >> inside VM and it configuration looks a bit strange: >> >> 1) you use two different bridges (OVSbr1/192.168.120.x and >> OVSbr2/192.168.110.x) and there is no patch between them so they're >> separate >> 2) while ARP requests for address in OVSbr1 arrives from OVSbr2: >> >>> 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell >> 192.168.120.6, length 28 >>> >>> but on the OVS bridge nothing arrives ... >>> >>> listening on OVSBR2, link-type EN10MB (Ethernet), capture size >>> 262144 bytes >> >> while these bridges are separate, ARP requests and answers will not >> be passed between them. >> >> Regarding your devstack configuration - unfortunately, I don't have >> experience with devstack, so don't know, where it stores configs. In >> Openstack, ml2_conf.ini points to openvswitch in ml2's >> mechanism_drivers parameter, in my case it looks as the following: >> >> [ml2] >> mechanism_drivers = l2population,openvswitch >> >> and rest of openvswitch config described in >> /etc/neutron/plugins/ml2/openvswitch_agent.ini >> >> Second - I see an ambiguity in your br-tun configuration, where >> patch_int is the same as patch-int without corresponding remote peer >> config, probably you should check this issue. >> >> And third is - note that Mitaka is quite old release and probably >> you can give a chance for the latest release of devstack? :-) >> >> On 1/31/18 10:49 PM, Mathias Strufe (DFKI) wrote: >> Dear Volodymyr, all, >> >> thanks for your fast answer ... >> but I'm still facing the same problem, still can't ping the >> instance with configured and up OVS bridge ... may because I'm quite >> new to OpenStack and OpenVswitch and didn't see the problem ;) >> >> My setup is devstack Mitaka in single machine config ... first of >> all I didn't find there the openvswitch_agent.ini anymore, I >> remember in previous version it was in the neutron/plugin folder ... >> >> Is this config now done in the ml2 config file in the [OVS] >> section???? >> >> I'm really wondering ... >> so I can ping between the 2 instances without any problem. But as >> soon I bring up the OVS bridge inside the vm the ARP requests only >> visible at the ens interface but not reaching the OVSbr ... >> >> please find attached two files which may help for troubleshooting. 
>> One are some network information from inside the Instance that runs >> the OVS and one ovs-vsctl info of the OpenStack Host. >> >> If you need more info/logs please let me know! Thanks for your >> help! >> >> BR Mathias. >> >> On 2018-01-27 22:44, Volodymyr Litovka wrote: >> Hi Mathias, >> >> whether you have all corresponding bridges and patches between >> them >> as described in openvswitch_agent.ini using >> >> integration_bridge >> tunnel_bridge >> int_peer_patch_port >> tun_peer_patch_port >> bridge_mappings >> >> parameters? And make sure, that service "neutron-ovs-cleanup" is >> in >> use during system boot. You can check these bridges and patches >> using >> "ovs-vsctl show" command. >> >> On 1/27/18 9:00 PM, Mathias Strufe (DFKI) wrote: >> >> Dear all, >> >> I'm quite new to openstack and like to install openVSwtich inside >> one Instance of our Mitika openstack Lab Enviornment ... >> But it seems that ARP packets got lost between the network >> interface of the instance and the OVS bridge ... >> >> With tcpdump on the interface I see the APR packets ... >> >> tcpdump: verbose output suppressed, use -v or -vv for full protocol >> >> decode >> listening on ens6, link-type EN10MB (Ethernet), capture size 262144 >> >> bytes >> 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell >> 192.168.120.6, length 28 >> 18:50:58.125009 ARP, Request who-has 192.168.120.1 tell >> 192.168.120.6, length 28 >> 18:50:59.077315 ARP, Request who-has 192.168.120.10 tell >> 192.168.120.6, length 28 >> 18:50:59.121369 ARP, Request who-has 192.168.120.1 tell >> 192.168.120.6, length 28 >> 18:51:00.077327 ARP, Request who-has 192.168.120.10 tell >> 192.168.120.6, length 28 >> 18:51:00.121343 ARP, Request who-has 192.168.120.1 tell >> 192.168.120.6, length 28 >> >> but on the OVS bridge nothing arrives ... 
>> >> tcpdump: verbose output suppressed, use -v or -vv for full protocol >> >> decode >> listening on OVSbr2, link-type EN10MB (Ethernet), capture size >> 262144 bytes >> >> I disabled port_security and removed the security group but nothing >> >> changed >> >> > +-----------------------+---------------------------------------------------------------------------------------+ >> >> >> | Field | Value >> | >> >> > +-----------------------+---------------------------------------------------------------------------------------+ >> >> >> | admin_state_up | True >> | >> | allowed_address_pairs | >> | >> | binding:host_id | node11 >> | >> | binding:profile | {} >> | >> | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": >> true} | >> | binding:vif_type | ovs >> | >> | binding:vnic_type | normal >> | >> | created_at | 2018-01-27T16:45:48Z >> | >> | description | >> | >> | device_id | 74916967-984c-4617-ae33-b847de73de13 >> | >> | device_owner | compute:nova >> | >> | extra_dhcp_opts | >> | >> | fixed_ips | {"subnet_id": >> "525db7ff-2bf2-4c64-b41e-1e41570ec358", "ip_address": >> "192.168.120.10"} | >> | id | 74b754d6-0000-4c2e-bfd1-87f640154ac9 >> | >> | mac_address | fa:16:3e:af:90:0c >> | >> | name | >> | >> | network_id | 917254cb-9721-4207-99c5-8ead9f95d186 >> | >> | port_security_enabled | False >> | >> | project_id | c48457e73b664147a3d2d36d75dcd155 >> | >> | revision_number | 27 >> | >> | security_groups | >> | >> | status | ACTIVE >> | >> | tenant_id | c48457e73b664147a3d2d36d75dcd155 >> | >> | updated_at | 2018-01-27T18:54:24Z >> | >> >> > +-----------------------+---------------------------------------------------------------------------------------+ >> >> >> maybe the port_filter causes still the problem? But how to disable >> it? >> >> Any other idea? >> >> Thanks and BR Mathias. >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >> [1] >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >> [1] >> >> -- >> Volodymyr Litovka >> "Vision without Execution is Hallucination." -- Thomas Edison >> >> Links: >> ------ >> [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> [1] > > -- > Volodymyr Litovka > "Vision without Execution is Hallucination." -- Thomas Edison > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] > > -- > > BENJAMÍN DÍAZ > Cloud Computing Engineer > > bdiaz at whitestack.com > > Links: > ------ > [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -- Vielen Dank und Gruß Mathias. Many Thanks and kind regards, Mathias. -- Dipl.-Ing. (FH) Mathias Strufe Wissenschaftlicher Mitarbeiter / Researcher Intelligente Netze / Intelligent Networks Phone: +49 (0) 631 205 75 - 1826 Fax: +49 (0) 631 205 75 – 4400 E-Mail: Mathias.Strufe at dfki.de WWW: http://www.dfki.de/web/forschung/in WWW: https://selfnet-5g.eu/ -------------------------------------------------------------- Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH Trippstadter Strasse 122 D-67663 Kaiserslautern, Germany Geschaeftsfuehrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. 
Aukes Amtsgericht Kaiserslautern, HRB 2313 VAT-ID: DE 148 646 973 -------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: Network.pdf Type: application/pdf Size: 45097 bytes Desc: not available URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: OVSpatch URL: From bdiaz at whitestack.com Thu Feb 1 14:41:45 2018 From: bdiaz at whitestack.com (Benjamin Diaz) Date: Thu, 1 Feb 2018 11:41:45 -0300 Subject: [Openstack] OpenVSwitch inside Instance no ARP passthrough In-Reply-To: <1b307fff14ee05267a5dad10216c3d04@projects.dfki.uni-kl.de> References: <19e2c014fb8d332bdb3518befce68a37@projects.dfki.uni-kl.de> <11ea9728-9446-2d8c-db3f-f5712e891af4@gmx.com> <9e663e326f138cf141d11964764388f1@projects.dfki.uni-kl.de> <7da0834a-12f7-ce79-db48-87c4058040cd@gmx.com> <1b307fff14ee05267a5dad10216c3d04@projects.dfki.uni-kl.de> Message-ID: Mathias, Just to clarify: Which interface in which VM are you pinging from, and which interface in which VM are you pinging to? Also, if i recall correctly, in Mitaka, besides disabling port security, you had to disable ARP spoofing prevention for a scenario like this to work. In ml2_conf.ini: [AGENT] prevent_arp_spoofing = False I would also sincerely recommend though that you update your dev environment to use the latest version of Openstack (Pike). Greetings, Benjamin On Thu, Feb 1, 2018 at 11:11 AM, Mathias Strufe (DFKI) < mathias.strufe at dfki.de> wrote: > Dear Benjamin, Volodymyr, > > good question ;) ... I like to experiment with some kind of "Firewall NFV" > ... but in the first step, I want to build a Router VM between two networks > (and later extend it with some flow rules) ... OpenStack, in my case, is > more a foundation to build a "test environment" for my "own" application > ... please find attached a quick sketch of the current network ... > I did this already before with iptables inside the middle instance ... > worked quite well ... but know I like to achieve the same with OVS ... > I didn't expect that it is so much more difficult ;) ... > > I'm currently checking Volodymyrs answer ... I think first point is now > solved ... I "patched" now OVSbr1 and OVSbr2 inside the VM together (see > OVpatch file)... but I think this is important later when I really like to > ping from VM1 to VM2 ... but in the moment I only ping from VM1 to the > TestNFV ... but the arp requests only reaches ens4 but not OVSbr1 > (according to tcpdump)... > > May it have to do with port security and the (for OpenStack) unknown MAC > address of the OVS bridge? > > Thanks so far ... > > Mathias. > > > > > > On 2018-02-01 14:28, Benjamin Diaz wrote: > >> Dear Mathias, >> >> Could you attach a diagram of your network configuration and of what >> you are trying to achieve? >> Are you trying to install OVS inside a VM? If so, why? >> >> Greetings, >> Benjamin >> >> On Thu, Feb 1, 2018 at 8:30 AM, Volodymyr Litovka >> wrote: >> >> Dear Mathias, >>> >>> if I correctly understand your configuration, you're using bridges >>> inside VM and it configuration looks a bit strange: >>> >>> 1) you use two different bridges (OVSbr1/192.168.120.x and >>> OVSbr2/192.168.110.x) and there is no patch between them so they're >>> separate >>> 2) while ARP requests for address in OVSbr1 arrives from OVSbr2: >>> >>> 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell >>>> >>> 192.168.120.6, length 28 >>> >>>> >>>> but on the OVS bridge nothing arrives ... 
>>>> >>>> listening on OVSBR2, link-type EN10MB (Ethernet), capture size >>>> 262144 bytes >>>> >>> >>> while these bridges are separate, ARP requests and answers will not >>> be passed between them. >>> >>> Regarding your devstack configuration - unfortunately, I don't have >>> experience with devstack, so don't know, where it stores configs. In >>> Openstack, ml2_conf.ini points to openvswitch in ml2's >>> mechanism_drivers parameter, in my case it looks as the following: >>> >>> [ml2] >>> mechanism_drivers = l2population,openvswitch >>> >>> and rest of openvswitch config described in >>> /etc/neutron/plugins/ml2/openvswitch_agent.ini >>> >>> Second - I see an ambiguity in your br-tun configuration, where >>> patch_int is the same as patch-int without corresponding remote peer >>> config, probably you should check this issue. >>> >>> And third is - note that Mitaka is quite old release and probably >>> you can give a chance for the latest release of devstack? :-) >>> >>> On 1/31/18 10:49 PM, Mathias Strufe (DFKI) wrote: >>> Dear Volodymyr, all, >>> >>> thanks for your fast answer ... >>> but I'm still facing the same problem, still can't ping the >>> instance with configured and up OVS bridge ... may because I'm quite >>> new to OpenStack and OpenVswitch and didn't see the problem ;) >>> >>> My setup is devstack Mitaka in single machine config ... first of >>> all I didn't find there the openvswitch_agent.ini anymore, I >>> remember in previous version it was in the neutron/plugin folder ... >>> >>> Is this config now done in the ml2 config file in the [OVS] >>> section???? >>> >>> I'm really wondering ... >>> so I can ping between the 2 instances without any problem. But as >>> soon I bring up the OVS bridge inside the vm the ARP requests only >>> visible at the ens interface but not reaching the OVSbr ... >>> >>> please find attached two files which may help for troubleshooting. >>> One are some network information from inside the Instance that runs >>> the OVS and one ovs-vsctl info of the OpenStack Host. >>> >>> If you need more info/logs please let me know! Thanks for your >>> help! >>> >>> BR Mathias. >>> >>> On 2018-01-27 22:44, Volodymyr Litovka wrote: >>> Hi Mathias, >>> >>> whether you have all corresponding bridges and patches between >>> them >>> as described in openvswitch_agent.ini using >>> >>> integration_bridge >>> tunnel_bridge >>> int_peer_patch_port >>> tun_peer_patch_port >>> bridge_mappings >>> >>> parameters? And make sure, that service "neutron-ovs-cleanup" is >>> in >>> use during system boot. You can check these bridges and patches >>> using >>> "ovs-vsctl show" command. >>> >>> On 1/27/18 9:00 PM, Mathias Strufe (DFKI) wrote: >>> >>> Dear all, >>> >>> I'm quite new to openstack and like to install openVSwtich inside >>> one Instance of our Mitika openstack Lab Enviornment ... >>> But it seems that ARP packets got lost between the network >>> interface of the instance and the OVS bridge ... >>> >>> With tcpdump on the interface I see the APR packets ... 
>>> >>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>> >>> decode >>> listening on ens6, link-type EN10MB (Ethernet), capture size 262144 >>> >>> bytes >>> 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>> 18:50:58.125009 ARP, Request who-has 192.168.120.1 tell >>> 192.168.120.6, length 28 >>> 18:50:59.077315 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>> 18:50:59.121369 ARP, Request who-has 192.168.120.1 tell >>> 192.168.120.6, length 28 >>> 18:51:00.077327 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>> 18:51:00.121343 ARP, Request who-has 192.168.120.1 tell >>> 192.168.120.6, length 28 >>> >>> but on the OVS bridge nothing arrives ... >>> >>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>> >>> decode >>> listening on OVSbr2, link-type EN10MB (Ethernet), capture size >>> 262144 bytes >>> >>> I disabled port_security and removed the security group but nothing >>> >>> changed >>> >>> >>> +-----------------------+----------------------------------- >> ----------------------------------------------------+ >> >>> >>> >>> | Field | Value >>> | >>> >>> >>> +-----------------------+----------------------------------- >> ----------------------------------------------------+ >> >>> >>> >>> | admin_state_up | True >>> | >>> | allowed_address_pairs | >>> | >>> | binding:host_id | node11 >>> | >>> | binding:profile | {} >>> | >>> | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": >>> true} | >>> | binding:vif_type | ovs >>> | >>> | binding:vnic_type | normal >>> | >>> | created_at | 2018-01-27T16:45:48Z >>> | >>> | description | >>> | >>> | device_id | 74916967-984c-4617-ae33-b847de73de13 >>> | >>> | device_owner | compute:nova >>> | >>> | extra_dhcp_opts | >>> | >>> | fixed_ips | {"subnet_id": >>> "525db7ff-2bf2-4c64-b41e-1e41570ec358", "ip_address": >>> "192.168.120.10"} | >>> | id | 74b754d6-0000-4c2e-bfd1-87f640154ac9 >>> | >>> | mac_address | fa:16:3e:af:90:0c >>> | >>> | name | >>> | >>> | network_id | 917254cb-9721-4207-99c5-8ead9f95d186 >>> | >>> | port_security_enabled | False >>> | >>> | project_id | c48457e73b664147a3d2d36d75dcd155 >>> | >>> | revision_number | 27 >>> | >>> | security_groups | >>> | >>> | status | ACTIVE >>> | >>> | tenant_id | c48457e73b664147a3d2d36d75dcd155 >>> | >>> | updated_at | 2018-01-27T18:54:24Z >>> | >>> >>> >>> +-----------------------+----------------------------------- >> ----------------------------------------------------+ >> >>> >>> >>> maybe the port_filter causes still the problem? But how to disable >>> it? >>> >>> Any other idea? >>> >>> Thanks and BR Mathias. >>> >>> _______________________________________________ >>> Mailing list: >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >>> [1] >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >>> [1] >>> >>> -- >>> Volodymyr Litovka >>> "Vision without Execution is Hallucination." -- Thomas Edison >>> >>> Links: >>> ------ >>> [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> [1] >>> >> >> -- >> Volodymyr Litovka >> "Vision without Execution is Hallucination." 
-- Thomas Edison >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >> >> -- >> >> BENJAMÍN DÍAZ >> Cloud Computing Engineer >> >> bdiaz at whitestack.com >> >> Links: >> ------ >> [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> > > -- > Vielen Dank und Gruß Mathias. > Many Thanks and kind regards, Mathias. > > -- > Dipl.-Ing. (FH) Mathias Strufe > Wissenschaftlicher Mitarbeiter / Researcher > Intelligente Netze / Intelligent Networks > > Phone: +49 (0) 631 205 75 - 1826 > Fax: +49 (0) 631 205 75 – 4400 > > E-Mail: Mathias.Strufe at dfki.de > WWW: http://www.dfki.de/web/forschung/in > > WWW: https://selfnet-5g.eu/ > > -------------------------------------------------------------- > Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH > Trippstadter Strasse 122 > D-67663 Kaiserslautern, Germany > > Geschaeftsfuehrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter > Olthoff > > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Amtsgericht Kaiserslautern, HRB 2313 > VAT-ID: DE 148 646 973 > -------------------------------------------------------------- > > -- *Benjamín Díaz* Cloud Computing Engineer bdiaz at whitestack.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Feb 1 14:56:01 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 1 Feb 2018 09:56:01 -0500 Subject: [Openstack] tripleO Error No valid host was found In-Reply-To: <35E1C10A-6220-4689-8A95-D44EA437ADDE@gmail.com> References: <20180131234803.GD23143@thor.bakeyournoodle.com> <1F20886F-70CE-4F60-BE1B-9BF0EE57A06B@Italy1.com> <35E1C10A-6220-4689-8A95-D44EA437ADDE@gmail.com> Message-ID: Hi Lars, I am following this article https://www.linuxtechi.com/deploy-tripleo-overcloud-controller-computes-centos-7/ (undercloud) [root at tripleo ~]# openstack flavor list +--------------------------------------+---------------+------+------+-----------+-------+-----------+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public | +--------------------------------------+---------------+------+------+-----------+-------+-----------+ | 0c8a1969-385c-4f93-b43f-7d68f22039f3 | compute | 4096 | 40 | 0 | 1 | True | | 26481980-ecc8-4262-91b1-83afac58f62f | control | 4096 | 40 | 0 | 1 | True | | 97a4b189-c743-4ca8-aaae-b33aebeb4470 | ceph-storage | 4096 | 40 | 0 | 1 | True | | 9bedb045-4a5a-423b-92d6-813cf3bb9b1b | baremetal | 4096 | 40 | 0 | 1 | True | | c3d26a13-430a-4db3-9f84-bf6ca6675629 | swift-storage | 4096 | 40 | 0 | 1 | True | | cb5b3d9d-3b47-428c-a455-17650185b85f | block-storage | 4096 | 40 | 0 | 1 | True | +--------------------------------------+---------------+------+------+-----------+-------+-----------+ (undercloud) [root at tripleo ~]# I have created two VM on VMware Workstation with 2 CPU and 2GB memory, but i didn't tell undercloud what type of instance i have, this is what i have in my instance.json file, How does undercloud will know what flavor i am going to deploy? 
[stack at tripleo ~]$ cat instance/instance.json { "nodes":[ { "mac":[ "00:0C:29:FE:72:97" ], "arch":"x86_64", "name": "overcloud-controller", "pm_type":"fake_pxe" }, { "mac":[ "00:0C:29:93:36:B2" ], "arch":"x86_64", "name": "overcloud-compute1", "pm_type":"fake_pxe" } ] } On Thu, Feb 1, 2018 at 2:40 AM, Lars Brune wrote: > Hi Satish, > > could you provide some more information about the hosts and flavors? > I’ve made the experience that it sometimes helps to create the flavors with a bit of a margin to the „real“ hardware to avoid nova complaining about this. > >> Am 01.02.2018 um 08:16 schrieb Remo Mattei : >> >> What are you running? Pike? Ocata? >> >> Pike option have changed so the flavor could be an issue now. You should look into that and see. >> >> Inviato da iPhone >> >>> Il giorno 01 feb 2018, alle ore 00:48, Tony Breeds ha scritto: >>> >>>> On Wed, Jan 31, 2018 at 06:10:11PM -0500, Satish Patel wrote: >>>> I am playing with tripleO and getting following error when deploying >>>> overcloud, I doing all this on VMware Workstation with fake_pxe >>>> driver, I did enable drive in ironic too. >>>> >>>> What could be wrong here? >>> >>> There's lots that could be wrong sadly. Testign under VMware is minimal >>> to none. There are some good tips at: >>> https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting.html >>> Specifically: >>> https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html#no-valid-host-found-error >>> >>> Yours Tony. >>> _______________________________________________ >>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From fawaz.moh.ibraheem at gmail.com Thu Feb 1 15:15:51 2018 From: fawaz.moh.ibraheem at gmail.com (Fawaz Mohammed) Date: Thu, 1 Feb 2018 19:15:51 +0400 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: Hi Satish, TripleO (Supported on CentOS and RHEL) and Juju (Supported on Ubuntu) [1] are the most used OpenStack deployment tools. >From the deployment perspectives, you need to know your cloud setup and the required plugins to decide which deployment you will choose. As each tool has a pre-integrated plugging out of the box. [1] https://docs.openstack.org/charm-deployment-guide/ latest/install-juju.html --- Regards, Fawaz Mohammed On Thu, Feb 1, 2018 at 5:04 PM, Satish Patel wrote: > Hi Fawaz, > > Great so you think I'm doing right thing to use ASR1k for my L3 function? > > I'm using DVR on my lab with 3 node cluster and it's kinda complicated to > troubleshoot sometime. > > We are planning to build 20 node cluster for start and started playing > with tripleo to deploy and having lots of issue with tripleo too :( I was > also thinking to use mirantis but they stopped development. > > Based on your experience what most of folks use to deploy middle size > cloud? What about you? > > > > > > > > Sent from my iPhone > > On Jan 31, 2018, at 11:48 PM, Fawaz Mohammed > wrote: > > http://www.jimmdenton.com/networking-cisco-asr-install/ can beet your > requirements. 
> I have another plugin in production from different vendor for
> the same purpose. and it works perfect.
>
> Regarding your question about the license, usually, there is no license
> for such plugins.
>
> I've no production experience with DVR, and I don't recommend it in medium
> to large environment.
>
> On Thu, Feb 1, 2018 at 2:58 AM, Satish Patel wrote:
>
>> What about this? http://www.jimmdenton.com/networking-cisco-asr-part-two/
>>
>> ML2 does use ASR too,
>>
>> Just curious what people mostly use in production? are they use DVR or
>> some kind of hardware for L3?
>>
>>
>> On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed
>> wrote:
>> > Hi Santish,
>> >
>> > In my knowlege, Cisco has ml2 driver for Nexus only.
>> >
>> > So, if you have requirements for dynamic L3 provisioning /
>> configuration,
>> > it's better to go with SDN solution.
>> >
>> > On Jan 31, 2018 11:39 PM, "Satish Patel" wrote:
>> >>
>> >> So no one using ASR 1001 for Openstack?
>> >>
>> >> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel
>> >> wrote:
>> >> > Folks,
>> >> >
>> >> > We are planning to deploy production style private cloud and
>> gathering
>> >> > information about what we should use and why and i came across with
>> >> > couple of document related network node criticality and performance
>> >> > issue and many folks suggesting following
>> >> >
>> >> > 1. DVR (it seem complicated after reading, also need lots of public
>> IP)
>> >> > 2. Use ASR1k centralized router to use for L3 function (any idea
>> what
>> >> > model should be good? or do we need any licensing to integrate with
>> >> > openstack?)
>> >> >
>> >> > Would like to get some input from folks who already using openstack
>> in
>> >> > production and would like to know what kind of deployment they pick
>> >> > for network/neutron performance?
>> >>
>> >> _______________________________________________
>> >> Mailing list:
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> >> Post to : openstack at lists.openstack.org
>> >> Unsubscribe :
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> >
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brune.lars at gmail.com Thu Feb 1 15:16:40 2018
From: brune.lars at gmail.com (Lars Brune)
Date: Thu, 1 Feb 2018 16:16:40 +0100
Subject: [Openstack] tripleO Error No valid host was found
In-Reply-To: 
References: <20180131234803.GD23143@thor.bakeyournoodle.com>
 <1F20886F-70CE-4F60-BE1B-9BF0EE57A06B@Italy1.com>
 <35E1C10A-6220-4689-8A95-D44EA437ADDE@gmail.com>
Message-ID: <491E9859-5367-4198-9D23-5B50185DAEE2@gmail.com>

Hi Satish,

so the problem seems to be the following:

You are telling the undercloud to use these flavors with:

(undercloud) [stack at tripleo instance]$ openstack overcloud deploy
--templates --control-scale 1 --compute-scale 1 --control-flavor
control --compute-flavor compute

Your flavors for control, compute etc. say they need at least 4GB of RAM,
40GB of disk etc., but you only have instances providing 2GB of RAM.
This is why nova is complaining about not finding a valid host: the
scheduler simply cannot fulfil your request for a host with those
resources available.

To get around this you could either create VMs with a bit more RAM, or
you need to change the flavors - but I'm afraid 2GB of RAM is too little.
If you have the resources, try to use at least 8GB per control or compute
node; if not, you could try 4GB plus swap, but I'm not sure that works
well enough.
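
For example, something along these lines (just a rough sketch - the flavor
names are the ones from your "openstack flavor list" output, the numbers are
only an illustration, so match them to what your VMs can really offer, and
re-apply whatever properties your current flavors carry, e.g.
capabilities:boot_option / capabilities:profile / cpu_arch - check with
"openstack flavor show" first):

(undercloud) [stack at tripleo ~]$ openstack flavor show control
(undercloud) [stack at tripleo ~]$ openstack flavor delete control
(undercloud) [stack at tripleo ~]$ openstack flavor create --id auto --ram 2048 --vcpus 1 --disk 38 control
(undercloud) [stack at tripleo ~]$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" control

The RAM/vCPU/disk of an existing flavor can't be edited in place, that's why
it has to be deleted and re-created; the same applies to the compute flavor.
Also keep in mind that the flavor's disk must not be bigger than the disk
the node actually reports, otherwise you are back to "No valid host was
found".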
> Am 01.02.2018 um 15:56 schrieb Satish Patel : > > Hi Lars, > > I am following this article > https://www.linuxtechi.com/deploy-tripleo-overcloud-controller-computes-centos-7/ > > (undercloud) [root at tripleo ~]# openstack flavor list > +--------------------------------------+---------------+------+------+-----------+-------+-----------+ > | ID | Name | RAM | Disk | > Ephemeral | VCPUs | Is Public | > +--------------------------------------+---------------+------+------+-----------+-------+-----------+ > | 0c8a1969-385c-4f93-b43f-7d68f22039f3 | compute | 4096 | 40 | > 0 | 1 | True | > | 26481980-ecc8-4262-91b1-83afac58f62f | control | 4096 | 40 | > 0 | 1 | True | > | 97a4b189-c743-4ca8-aaae-b33aebeb4470 | ceph-storage | 4096 | 40 | > 0 | 1 | True | > | 9bedb045-4a5a-423b-92d6-813cf3bb9b1b | baremetal | 4096 | 40 | > 0 | 1 | True | > | c3d26a13-430a-4db3-9f84-bf6ca6675629 | swift-storage | 4096 | 40 | > 0 | 1 | True | > | cb5b3d9d-3b47-428c-a455-17650185b85f | block-storage | 4096 | 40 | > 0 | 1 | True | > +--------------------------------------+---------------+------+------+-----------+-------+-----------+ > (undercloud) [root at tripleo ~]# > > > I have created two VM on VMware Workstation with 2 CPU and 2GB memory, > but i didn't tell undercloud what type of instance i have, this is > what i have in my instance.json file, How does undercloud will know > what flavor i am going to deploy? > > > [stack at tripleo ~]$ cat instance/instance.json > { > "nodes":[ > { > "mac":[ > "00:0C:29:FE:72:97" > ], > "arch":"x86_64", > "name": "overcloud-controller", > "pm_type":"fake_pxe" > }, > { > "mac":[ > "00:0C:29:93:36:B2" > ], > "arch":"x86_64", > "name": "overcloud-compute1", > "pm_type":"fake_pxe" > } > ] > } > > On Thu, Feb 1, 2018 at 2:40 AM, Lars Brune wrote: >> Hi Satish, >> >> could you provide some more information about the hosts and flavors? >> I’ve made the experience that it sometimes helps to create the flavors with a bit of a margin to the „real“ hardware to avoid nova complaining about this. >> >>> Am 01.02.2018 um 08:16 schrieb Remo Mattei : >>> >>> What are you running? Pike? Ocata? >>> >>> Pike option have changed so the flavor could be an issue now. You should look into that and see. >>> >>> Inviato da iPhone >>> >>>> Il giorno 01 feb 2018, alle ore 00:48, Tony Breeds ha scritto: >>>> >>>>> On Wed, Jan 31, 2018 at 06:10:11PM -0500, Satish Patel wrote: >>>>> I am playing with tripleO and getting following error when deploying >>>>> overcloud, I doing all this on VMware Workstation with fake_pxe >>>>> driver, I did enable drive in ironic too. >>>>> >>>>> What could be wrong here? >>>> >>>> There's lots that could be wrong sadly. Testign under VMware is minimal >>>> to none. There are some good tips at: >>>> https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting.html >>>> Specifically: >>>> https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html#no-valid-host-found-error >>>> >>>> Yours Tony. 
>>>> _______________________________________________ >>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>> Post to : openstack at lists.openstack.org >>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >>> >>> _______________________________________________ >>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Feb 1 15:22:39 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 1 Feb 2018 10:22:39 -0500 Subject: [Openstack] tripleO Error No valid host was found In-Reply-To: <491E9859-5367-4198-9D23-5B50185DAEE2@gmail.com> References: <20180131234803.GD23143@thor.bakeyournoodle.com> <1F20886F-70CE-4F60-BE1B-9BF0EE57A06B@Italy1.com> <35E1C10A-6220-4689-8A95-D44EA437ADDE@gmail.com> <491E9859-5367-4198-9D23-5B50185DAEE2@gmail.com> Message-ID: You have a point +1 But i am confused where i didn't tell undercloud what kind of VM i am going to use for overcloud then how does undercloud find out i am using 2 vCPU and 2GB memory vm? On VMware i don't have any ipmitool etc where undercloud inspect my hardware. That is what i am trying to understand how undercloud going to inspect my hardware in VMware Workstation? On Thu, Feb 1, 2018 at 10:16 AM, Lars Brune wrote: > Hi Satish, > > so the problem seems to be the following: > > You are telling the under cloud to use these flavors with: > > (undercloud) [stack at tripleo instance]$ openstack overcloud deploy > --templates --control-scale 1 --compute-scale 1 --control-flavor > control --compute-flavor compute > > Your flavor for control, compute etc. says it needs at leats 4GB of RAM, > 40GB of Disk etc. but you only have instances providing 2GB of RAM. > This is way nova is complaining about not finding a valid host. > The scheduler simply can not fulfil your request for a host with those > resources available. > > To get around this you could either create VMs with a bit more ram etc, > or you need to change the flavors - but i’m afraid 2GB of ram is to little. > If you have the resources try to use at least 8GB per control or compute > node if not you could try with 4GB and swap but i’m not sure if this works > well enough. 
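On the question of how the undercloud learns the VM's CPU and RAM without IPMI: with the fake_pxe driver there is no BMC to query, so the node resources either come from introspection (the inspector ramdisk PXE-boots on the VM, which is powered on by hand, and reports CPU, RAM and disk back to ironic) or are set manually on each node. A rough, untested sketch; the node name follows the instance.json above and the numbers are assumptions that should match the real VMs:

  (undercloud) $ openstack overcloud node introspect --all-manageable --provide

or, setting the properties by hand:

  (undercloud) $ openstack baremetal node set overcloud-controller \
        --property cpus=2 --property memory_mb=2048 \
        --property local_gb=40 --property cpu_arch=x86_64

Nova then compares these properties against the chosen flavor, which is why 4GB flavors and 2GB VMs cannot be matched.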
> > > Am 01.02.2018 um 15:56 schrieb Satish Patel : > > Hi Lars, > > I am following this article > https://www.linuxtechi.com/deploy-tripleo-overcloud-controller-computes-centos-7/ > > (undercloud) [root at tripleo ~]# openstack flavor list > +--------------------------------------+---------------+------+------+-----------+-------+-----------+ > | ID | Name | RAM | Disk | > Ephemeral | VCPUs | Is Public | > +--------------------------------------+---------------+------+------+-----------+-------+-----------+ > | 0c8a1969-385c-4f93-b43f-7d68f22039f3 | compute | 4096 | 40 | > 0 | 1 | True | > | 26481980-ecc8-4262-91b1-83afac58f62f | control | 4096 | 40 | > 0 | 1 | True | > | 97a4b189-c743-4ca8-aaae-b33aebeb4470 | ceph-storage | 4096 | 40 | > 0 | 1 | True | > | 9bedb045-4a5a-423b-92d6-813cf3bb9b1b | baremetal | 4096 | 40 | > 0 | 1 | True | > | c3d26a13-430a-4db3-9f84-bf6ca6675629 | swift-storage | 4096 | 40 | > 0 | 1 | True | > | cb5b3d9d-3b47-428c-a455-17650185b85f | block-storage | 4096 | 40 | > 0 | 1 | True | > +--------------------------------------+---------------+------+------+-----------+-------+-----------+ > (undercloud) [root at tripleo ~]# > > > I have created two VM on VMware Workstation with 2 CPU and 2GB memory, > but i didn't tell undercloud what type of instance i have, this is > what i have in my instance.json file, How does undercloud will know > what flavor i am going to deploy? > > > [stack at tripleo ~]$ cat instance/instance.json > { > "nodes":[ > { > "mac":[ > "00:0C:29:FE:72:97" > ], > "arch":"x86_64", > "name": "overcloud-controller", > "pm_type":"fake_pxe" > }, > { > "mac":[ > "00:0C:29:93:36:B2" > ], > "arch":"x86_64", > "name": "overcloud-compute1", > "pm_type":"fake_pxe" > } > ] > } > > On Thu, Feb 1, 2018 at 2:40 AM, Lars Brune wrote: > > Hi Satish, > > could you provide some more information about the hosts and flavors? > I’ve made the experience that it sometimes helps to create the flavors with > a bit of a margin to the „real“ hardware to avoid nova complaining about > this. > > Am 01.02.2018 um 08:16 schrieb Remo Mattei : > > What are you running? Pike? Ocata? > > Pike option have changed so the flavor could be an issue now. You should > look into that and see. > > Inviato da iPhone > > Il giorno 01 feb 2018, alle ore 00:48, Tony Breeds > ha scritto: > > On Wed, Jan 31, 2018 at 06:10:11PM -0500, Satish Patel wrote: > I am playing with tripleO and getting following error when deploying > overcloud, I doing all this on VMware Workstation with fake_pxe > driver, I did enable drive in ironic too. > > What could be wrong here? > > > There's lots that could be wrong sadly. Testign under VMware is minimal > to none. There are some good tips at: > > https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting.html > Specifically: > > https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html#no-valid-host-found-error > > Yours Tony. 
> _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > From satish.txt at gmail.com Thu Feb 1 15:32:28 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 1 Feb 2018 10:32:28 -0500 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: We are CentOS7 shop that is why i have started playing with TripleO on my VMware environment to get familiar. what do you suggest for 20 node cluster? we don't have any storage requirement all we need CPU and Memory to run our application, currently all application running on VMware and i want to migrate them over Openstack and do some rapid deployment. This is what i am planning (let me if i am wrong) 2 controller node (in HA) 2 Cisco ASR1k (L3 function in HA) 20 compute node with 10G nic and lots of memory and many cpu cores. Anything else which i am missing? On Thu, Feb 1, 2018 at 10:15 AM, Fawaz Mohammed wrote: > Hi Satish, > > TripleO (Supported on CentOS and RHEL) and Juju (Supported on Ubuntu) [1] > are the most used OpenStack deployment tools. > > From the deployment perspectives, you need to know your cloud setup and the > required plugins to decide which deployment you will choose. As each tool > has a pre-integrated plugging out of the box. > > > [1] > https://docs.openstack.org/charm-deployment-guide/latest/install-juju.html > --- > Regards, > Fawaz Mohammed > > On Thu, Feb 1, 2018 at 5:04 PM, Satish Patel wrote: >> >> Hi Fawaz, >> >> Great so you think I'm doing right thing to use ASR1k for my L3 function? >> >> I'm using DVR on my lab with 3 node cluster and it's kinda complicated to >> troubleshoot sometime. >> >> We are planning to build 20 node cluster for start and started playing >> with tripleo to deploy and having lots of issue with tripleo too :( I was >> also thinking to use mirantis but they stopped development. >> >> Based on your experience what most of folks use to deploy middle size >> cloud? What about you? >> >> >> >> >> >> >> >> Sent from my iPhone >> >> On Jan 31, 2018, at 11:48 PM, Fawaz Mohammed >> wrote: >> >> http://www.jimmdenton.com/networking-cisco-asr-install/ can beet your >> requirements. I have another plugin in production from different vendor for >> the same purpose. and it works perfect. >> >> Regarding your question about the license, usually, there is no license >> for such plugins. >> >> I've no production experience with DVR, and I don't recommend it in medium >> to large environment. >> >> On Thu, Feb 1, 2018 at 2:58 AM, Satish Patel wrote: >>> >>> What about this? http://www.jimmdenton.com/networking-cisco-asr-part-two/ >>> >>> ML2 does use ASR too, >>> >>> Just curious what people mostly use in production? are they use DVR or >>> some kind of hardware for L3? >>> >>> >>> On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed >>> wrote: >>> > Hi Santish, >>> > >>> > In my knowlege, Cisco has ml2 driver for Nexus only. >>> > >>> > So, if you have requirements for dynamic L3 provisioning / >>> > configuration, >>> > it's better to go with SDN solution. 
>>> > >>> > On Jan 31, 2018 11:39 PM, "Satish Patel" wrote: >>> >> >>> >> So no one using ASR 1001 for Openstack? >>> >> >>> >> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel >>> >> wrote: >>> >> > Folks, >>> >> > >>> >> > We are planning to deploy production style private cloud and >>> >> > gathering >>> >> > information about what we should use and why and i came across with >>> >> > couple of document related network node criticality and performance >>> >> > issue and many folks suggesting following >>> >> > >>> >> > 1. DVR (it seem complicated after reading, also need lots of public >>> >> > IP) >>> >> > 2. Use ASR1k centralized router to use for L3 function (any idea >>> >> > what >>> >> > model should be good? or do we need any licensing to integrate with >>> >> > openstack?) >>> >> > >>> >> > Would like to get some input from folks who already using openstack >>> >> > in >>> >> > production and would like to know what kind of deployment they pick >>> >> > for network/neutron performance? >>> >> >>> >> _______________________________________________ >>> >> Mailing list: >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >> Post to : openstack at lists.openstack.org >>> >> Unsubscribe : >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> > From fawaz.moh.ibraheem at gmail.com Thu Feb 1 15:45:03 2018 From: fawaz.moh.ibraheem at gmail.com (Fawaz Mohammed) Date: Thu, 1 Feb 2018 19:45:03 +0400 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: For TripleO deployment, you'll need: A separate node or VM as UnderCloud. 3 control nodes for HA, 2 controllers can't form HA by default, this is to avoid split-brain scenarios. For compute nodes, I recommend to have 2 x 10G ports, one for O&M and the other one for tenants. And a separate 1G for BMC / IPMI. --- Regards, Fawaz Mohammed On Thu, Feb 1, 2018 at 7:32 PM, Satish Patel wrote: > We are CentOS7 shop that is why i have started playing with TripleO on > my VMware environment to get familiar. what do you suggest for 20 node > cluster? we don't have any storage requirement all we need CPU and > Memory to run our application, currently all application running on > VMware and i want to migrate them over Openstack and do some rapid > deployment. > > This is what i am planning (let me if i am wrong) > > 2 controller node (in HA) > 2 Cisco ASR1k (L3 function in HA) > 20 compute node with 10G nic and lots of memory and many cpu cores. > > Anything else which i am missing? > > On Thu, Feb 1, 2018 at 10:15 AM, Fawaz Mohammed > wrote: > > Hi Satish, > > > > TripleO (Supported on CentOS and RHEL) and Juju (Supported on Ubuntu) [1] > > are the most used OpenStack deployment tools. > > > > From the deployment perspectives, you need to know your cloud setup and > the > > required plugins to decide which deployment you will choose. As each tool > > has a pre-integrated plugging out of the box. > > > > > > [1] > > https://docs.openstack.org/charm-deployment-guide/latest/ > install-juju.html > > --- > > Regards, > > Fawaz Mohammed > > > > On Thu, Feb 1, 2018 at 5:04 PM, Satish Patel > wrote: > >> > >> Hi Fawaz, > >> > >> Great so you think I'm doing right thing to use ASR1k for my L3 > function? > >> > >> I'm using DVR on my lab with 3 node cluster and it's kinda complicated > to > >> troubleshoot sometime. 
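Expressed in the deploy command syntax quoted earlier in the thread, the sizing suggested above (a separate undercloud, 3 controllers for HA, 20 computes) would look roughly like the following; the environment files are placeholders for whatever network/NIC layout is actually used, so treat this as an untested sketch:

  (undercloud) $ openstack overcloud deploy --templates \
        --control-scale 3 --compute-scale 20 \
        --control-flavor control --compute-flavor compute \
        --ntp-server pool.ntp.org \
        -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
        -e ~/templates/network-environment.yaml

With only two controllers the Galera/Pacemaker cluster has no majority to fall back on when one node fails, which is the split-brain problem mentioned above.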
> >> > >> We are planning to build 20 node cluster for start and started playing > >> with tripleo to deploy and having lots of issue with tripleo too :( I > was > >> also thinking to use mirantis but they stopped development. > >> > >> Based on your experience what most of folks use to deploy middle size > >> cloud? What about you? > >> > >> > >> > >> > >> > >> > >> > >> Sent from my iPhone > >> > >> On Jan 31, 2018, at 11:48 PM, Fawaz Mohammed > >> wrote: > >> > >> http://www.jimmdenton.com/networking-cisco-asr-install/ can beet your > >> requirements. I have another plugin in production from different vendor > for > >> the same purpose. and it works perfect. > >> > >> Regarding your question about the license, usually, there is no license > >> for such plugins. > >> > >> I've no production experience with DVR, and I don't recommend it in > medium > >> to large environment. > >> > >> On Thu, Feb 1, 2018 at 2:58 AM, Satish Patel > wrote: > >>> > >>> What about this? http://www.jimmdenton.com/ > networking-cisco-asr-part-two/ > >>> > >>> ML2 does use ASR too, > >>> > >>> Just curious what people mostly use in production? are they use DVR or > >>> some kind of hardware for L3? > >>> > >>> > >>> On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed > >>> wrote: > >>> > Hi Santish, > >>> > > >>> > In my knowlege, Cisco has ml2 driver for Nexus only. > >>> > > >>> > So, if you have requirements for dynamic L3 provisioning / > >>> > configuration, > >>> > it's better to go with SDN solution. > >>> > > >>> > On Jan 31, 2018 11:39 PM, "Satish Patel" > wrote: > >>> >> > >>> >> So no one using ASR 1001 for Openstack? > >>> >> > >>> >> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel < > satish.txt at gmail.com> > >>> >> wrote: > >>> >> > Folks, > >>> >> > > >>> >> > We are planning to deploy production style private cloud and > >>> >> > gathering > >>> >> > information about what we should use and why and i came across > with > >>> >> > couple of document related network node criticality and > performance > >>> >> > issue and many folks suggesting following > >>> >> > > >>> >> > 1. DVR (it seem complicated after reading, also need lots of > public > >>> >> > IP) > >>> >> > 2. Use ASR1k centralized router to use for L3 function (any idea > >>> >> > what > >>> >> > model should be good? or do we need any licensing to integrate > with > >>> >> > openstack?) > >>> >> > > >>> >> > Would like to get some input from folks who already using > openstack > >>> >> > in > >>> >> > production and would like to know what kind of deployment they > pick > >>> >> > for network/neutron performance? > >>> >> > >>> >> _______________________________________________ > >>> >> Mailing list: > >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >>> >> Post to : openstack at lists.openstack.org > >>> >> Unsubscribe : > >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >> > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Feb 1 16:07:10 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 1 Feb 2018 11:07:10 -0500 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: Thanks for info i will keep in mind, Can we use VM for controller node? or 1 physical and 2 VM controller node, just a thought On Thu, Feb 1, 2018 at 10:45 AM, Fawaz Mohammed wrote: > For TripleO deployment, you'll need: > A separate node or VM as UnderCloud. 
> 3 control nodes for HA, 2 controllers can't form HA by default, this is to > avoid split-brain scenarios. > For compute nodes, I recommend to have 2 x 10G ports, one for O&M and the > other one for tenants. And a separate 1G for BMC / IPMI. > > --- > Regards, > Fawaz Mohammed > > > On Thu, Feb 1, 2018 at 7:32 PM, Satish Patel wrote: >> >> We are CentOS7 shop that is why i have started playing with TripleO on >> my VMware environment to get familiar. what do you suggest for 20 node >> cluster? we don't have any storage requirement all we need CPU and >> Memory to run our application, currently all application running on >> VMware and i want to migrate them over Openstack and do some rapid >> deployment. >> >> This is what i am planning (let me if i am wrong) >> >> 2 controller node (in HA) >> 2 Cisco ASR1k (L3 function in HA) >> 20 compute node with 10G nic and lots of memory and many cpu cores. >> >> Anything else which i am missing? >> >> On Thu, Feb 1, 2018 at 10:15 AM, Fawaz Mohammed >> wrote: >> > Hi Satish, >> > >> > TripleO (Supported on CentOS and RHEL) and Juju (Supported on Ubuntu) >> > [1] >> > are the most used OpenStack deployment tools. >> > >> > From the deployment perspectives, you need to know your cloud setup and >> > the >> > required plugins to decide which deployment you will choose. As each >> > tool >> > has a pre-integrated plugging out of the box. >> > >> > >> > [1] >> > >> > https://docs.openstack.org/charm-deployment-guide/latest/install-juju.html >> > --- >> > Regards, >> > Fawaz Mohammed >> > >> > On Thu, Feb 1, 2018 at 5:04 PM, Satish Patel >> > wrote: >> >> >> >> Hi Fawaz, >> >> >> >> Great so you think I'm doing right thing to use ASR1k for my L3 >> >> function? >> >> >> >> I'm using DVR on my lab with 3 node cluster and it's kinda complicated >> >> to >> >> troubleshoot sometime. >> >> >> >> We are planning to build 20 node cluster for start and started playing >> >> with tripleo to deploy and having lots of issue with tripleo too :( I >> >> was >> >> also thinking to use mirantis but they stopped development. >> >> >> >> Based on your experience what most of folks use to deploy middle size >> >> cloud? What about you? >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> Sent from my iPhone >> >> >> >> On Jan 31, 2018, at 11:48 PM, Fawaz Mohammed >> >> wrote: >> >> >> >> http://www.jimmdenton.com/networking-cisco-asr-install/ can beet your >> >> requirements. I have another plugin in production from different vendor >> >> for >> >> the same purpose. and it works perfect. >> >> >> >> Regarding your question about the license, usually, there is no license >> >> for such plugins. >> >> >> >> I've no production experience with DVR, and I don't recommend it in >> >> medium >> >> to large environment. >> >> >> >> On Thu, Feb 1, 2018 at 2:58 AM, Satish Patel >> >> wrote: >> >>> >> >>> What about this? >> >>> http://www.jimmdenton.com/networking-cisco-asr-part-two/ >> >>> >> >>> ML2 does use ASR too, >> >>> >> >>> Just curious what people mostly use in production? are they use DVR or >> >>> some kind of hardware for L3? >> >>> >> >>> >> >>> On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed >> >>> wrote: >> >>> > Hi Santish, >> >>> > >> >>> > In my knowlege, Cisco has ml2 driver for Nexus only. >> >>> > >> >>> > So, if you have requirements for dynamic L3 provisioning / >> >>> > configuration, >> >>> > it's better to go with SDN solution. >> >>> > >> >>> > On Jan 31, 2018 11:39 PM, "Satish Patel" >> >>> > wrote: >> >>> >> >> >>> >> So no one using ASR 1001 for Openstack? 
>> >>> >> >> >>> >> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel >> >>> >> >> >>> >> wrote: >> >>> >> > Folks, >> >>> >> > >> >>> >> > We are planning to deploy production style private cloud and >> >>> >> > gathering >> >>> >> > information about what we should use and why and i came across >> >>> >> > with >> >>> >> > couple of document related network node criticality and >> >>> >> > performance >> >>> >> > issue and many folks suggesting following >> >>> >> > >> >>> >> > 1. DVR (it seem complicated after reading, also need lots of >> >>> >> > public >> >>> >> > IP) >> >>> >> > 2. Use ASR1k centralized router to use for L3 function (any idea >> >>> >> > what >> >>> >> > model should be good? or do we need any licensing to integrate >> >>> >> > with >> >>> >> > openstack?) >> >>> >> > >> >>> >> > Would like to get some input from folks who already using >> >>> >> > openstack >> >>> >> > in >> >>> >> > production and would like to know what kind of deployment they >> >>> >> > pick >> >>> >> > for network/neutron performance? >> >>> >> >> >>> >> _______________________________________________ >> >>> >> Mailing list: >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >>> >> Post to : openstack at lists.openstack.org >> >>> >> Unsubscribe : >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> >> >> >> > > > From fawaz.moh.ibraheem at gmail.com Thu Feb 1 16:21:09 2018 From: fawaz.moh.ibraheem at gmail.com (Fawaz Mohammed) Date: Thu, 1 Feb 2018 20:21:09 +0400 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: On TripleO deployment, it's a challenge to use VMs for any overclode role (controller, compute or storage). As undercloud uses BMC / IPMI to control the power if bare metal during the deployment. --- Regards, Fawaz Mohammed On Feb 1, 2018 8:07 PM, "Satish Patel" wrote: Thanks for info i will keep in mind, Can we use VM for controller node? or 1 physical and 2 VM controller node, just a thought On Thu, Feb 1, 2018 at 10:45 AM, Fawaz Mohammed wrote: > For TripleO deployment, you'll need: > A separate node or VM as UnderCloud. > 3 control nodes for HA, 2 controllers can't form HA by default, this is to > avoid split-brain scenarios. > For compute nodes, I recommend to have 2 x 10G ports, one for O&M and the > other one for tenants. And a separate 1G for BMC / IPMI. > > --- > Regards, > Fawaz Mohammed > > > On Thu, Feb 1, 2018 at 7:32 PM, Satish Patel wrote: >> >> We are CentOS7 shop that is why i have started playing with TripleO on >> my VMware environment to get familiar. what do you suggest for 20 node >> cluster? we don't have any storage requirement all we need CPU and >> Memory to run our application, currently all application running on >> VMware and i want to migrate them over Openstack and do some rapid >> deployment. >> >> This is what i am planning (let me if i am wrong) >> >> 2 controller node (in HA) >> 2 Cisco ASR1k (L3 function in HA) >> 20 compute node with 10G nic and lots of memory and many cpu cores. >> >> Anything else which i am missing? >> >> On Thu, Feb 1, 2018 at 10:15 AM, Fawaz Mohammed >> wrote: >> > Hi Satish, >> > >> > TripleO (Supported on CentOS and RHEL) and Juju (Supported on Ubuntu) >> > [1] >> > are the most used OpenStack deployment tools. >> > >> > From the deployment perspectives, you need to know your cloud setup and >> > the >> > required plugins to decide which deployment you will choose. As each >> > tool >> > has a pre-integrated plugging out of the box. 
>> > >> > >> > [1] >> > >> > https://docs.openstack.org/charm-deployment-guide/latest/ install-juju.html >> > --- >> > Regards, >> > Fawaz Mohammed >> > >> > On Thu, Feb 1, 2018 at 5:04 PM, Satish Patel >> > wrote: >> >> >> >> Hi Fawaz, >> >> >> >> Great so you think I'm doing right thing to use ASR1k for my L3 >> >> function? >> >> >> >> I'm using DVR on my lab with 3 node cluster and it's kinda complicated >> >> to >> >> troubleshoot sometime. >> >> >> >> We are planning to build 20 node cluster for start and started playing >> >> with tripleo to deploy and having lots of issue with tripleo too :( I >> >> was >> >> also thinking to use mirantis but they stopped development. >> >> >> >> Based on your experience what most of folks use to deploy middle size >> >> cloud? What about you? >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> Sent from my iPhone >> >> >> >> On Jan 31, 2018, at 11:48 PM, Fawaz Mohammed >> >> wrote: >> >> >> >> http://www.jimmdenton.com/networking-cisco-asr-install/ can beet your >> >> requirements. I have another plugin in production from different vendor >> >> for >> >> the same purpose. and it works perfect. >> >> >> >> Regarding your question about the license, usually, there is no license >> >> for such plugins. >> >> >> >> I've no production experience with DVR, and I don't recommend it in >> >> medium >> >> to large environment. >> >> >> >> On Thu, Feb 1, 2018 at 2:58 AM, Satish Patel >> >> wrote: >> >>> >> >>> What about this? >> >>> http://www.jimmdenton.com/networking-cisco-asr-part-two/ >> >>> >> >>> ML2 does use ASR too, >> >>> >> >>> Just curious what people mostly use in production? are they use DVR or >> >>> some kind of hardware for L3? >> >>> >> >>> >> >>> On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed >> >>> wrote: >> >>> > Hi Santish, >> >>> > >> >>> > In my knowlege, Cisco has ml2 driver for Nexus only. >> >>> > >> >>> > So, if you have requirements for dynamic L3 provisioning / >> >>> > configuration, >> >>> > it's better to go with SDN solution. >> >>> > >> >>> > On Jan 31, 2018 11:39 PM, "Satish Patel" >> >>> > wrote: >> >>> >> >> >>> >> So no one using ASR 1001 for Openstack? >> >>> >> >> >>> >> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel >> >>> >> >> >>> >> wrote: >> >>> >> > Folks, >> >>> >> > >> >>> >> > We are planning to deploy production style private cloud and >> >>> >> > gathering >> >>> >> > information about what we should use and why and i came across >> >>> >> > with >> >>> >> > couple of document related network node criticality and >> >>> >> > performance >> >>> >> > issue and many folks suggesting following >> >>> >> > >> >>> >> > 1. DVR (it seem complicated after reading, also need lots of >> >>> >> > public >> >>> >> > IP) >> >>> >> > 2. Use ASR1k centralized router to use for L3 function (any idea >> >>> >> > what >> >>> >> > model should be good? or do we need any licensing to integrate >> >>> >> > with >> >>> >> > openstack?) >> >>> >> > >> >>> >> > Would like to get some input from folks who already using >> >>> >> > openstack >> >>> >> > in >> >>> >> > production and would like to know what kind of deployment they >> >>> >> > pick >> >>> >> > for network/neutron performance? 
>> >>> >> >> >>> >> _______________________________________________ >> >>> >> Mailing list: >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >>> >> Post to : openstack at lists.openstack.org >> >>> >> Unsubscribe : >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> >> >> >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Feb 1 16:23:41 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 1 Feb 2018 11:23:41 -0500 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: You have a point there! Got it, One more question what will happened if deploy my cloud and second day my undercloud (TripleO) node is dead? is it going to impact my overcloud production? because all recipe cooked inside undercloud and now its no more. Do you think i need to keep backup of undercloud? On Thu, Feb 1, 2018 at 11:21 AM, Fawaz Mohammed wrote: > On TripleO deployment, it's a challenge to use VMs for any overclode role > (controller, compute or storage). As undercloud uses BMC / IPMI to control > the power if bare metal during the deployment. > > --- > Regards, > Fawaz Mohammed > > > On Feb 1, 2018 8:07 PM, "Satish Patel" wrote: > > Thanks for info i will keep in mind, Can we use VM for controller > node? or 1 physical and 2 VM controller node, just a thought > > On Thu, Feb 1, 2018 at 10:45 AM, Fawaz Mohammed > wrote: >> For TripleO deployment, you'll need: >> A separate node or VM as UnderCloud. >> 3 control nodes for HA, 2 controllers can't form HA by default, this is to >> avoid split-brain scenarios. >> For compute nodes, I recommend to have 2 x 10G ports, one for O&M and the >> other one for tenants. And a separate 1G for BMC / IPMI. >> >> --- >> Regards, >> Fawaz Mohammed >> >> >> On Thu, Feb 1, 2018 at 7:32 PM, Satish Patel wrote: >>> >>> We are CentOS7 shop that is why i have started playing with TripleO on >>> my VMware environment to get familiar. what do you suggest for 20 node >>> cluster? we don't have any storage requirement all we need CPU and >>> Memory to run our application, currently all application running on >>> VMware and i want to migrate them over Openstack and do some rapid >>> deployment. >>> >>> This is what i am planning (let me if i am wrong) >>> >>> 2 controller node (in HA) >>> 2 Cisco ASR1k (L3 function in HA) >>> 20 compute node with 10G nic and lots of memory and many cpu cores. >>> >>> Anything else which i am missing? >>> >>> On Thu, Feb 1, 2018 at 10:15 AM, Fawaz Mohammed >>> wrote: >>> > Hi Satish, >>> > >>> > TripleO (Supported on CentOS and RHEL) and Juju (Supported on Ubuntu) >>> > [1] >>> > are the most used OpenStack deployment tools. >>> > >>> > From the deployment perspectives, you need to know your cloud setup and >>> > the >>> > required plugins to decide which deployment you will choose. As each >>> > tool >>> > has a pre-integrated plugging out of the box. >>> > >>> > >>> > [1] >>> > >>> > >>> > https://docs.openstack.org/charm-deployment-guide/latest/install-juju.html >>> > --- >>> > Regards, >>> > Fawaz Mohammed >>> > >>> > On Thu, Feb 1, 2018 at 5:04 PM, Satish Patel >>> > wrote: >>> >> >>> >> Hi Fawaz, >>> >> >>> >> Great so you think I'm doing right thing to use ASR1k for my L3 >>> >> function? >>> >> >>> >> I'm using DVR on my lab with 3 node cluster and it's kinda complicated >>> >> to >>> >> troubleshoot sometime. 
>>> >> >>> >> We are planning to build 20 node cluster for start and started playing >>> >> with tripleo to deploy and having lots of issue with tripleo too :( I >>> >> was >>> >> also thinking to use mirantis but they stopped development. >>> >> >>> >> Based on your experience what most of folks use to deploy middle size >>> >> cloud? What about you? >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> Sent from my iPhone >>> >> >>> >> On Jan 31, 2018, at 11:48 PM, Fawaz Mohammed >>> >> wrote: >>> >> >>> >> http://www.jimmdenton.com/networking-cisco-asr-install/ can beet your >>> >> requirements. I have another plugin in production from different >>> >> vendor >>> >> for >>> >> the same purpose. and it works perfect. >>> >> >>> >> Regarding your question about the license, usually, there is no >>> >> license >>> >> for such plugins. >>> >> >>> >> I've no production experience with DVR, and I don't recommend it in >>> >> medium >>> >> to large environment. >>> >> >>> >> On Thu, Feb 1, 2018 at 2:58 AM, Satish Patel >>> >> wrote: >>> >>> >>> >>> What about this? >>> >>> http://www.jimmdenton.com/networking-cisco-asr-part-two/ >>> >>> >>> >>> ML2 does use ASR too, >>> >>> >>> >>> Just curious what people mostly use in production? are they use DVR >>> >>> or >>> >>> some kind of hardware for L3? >>> >>> >>> >>> >>> >>> On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed >>> >>> wrote: >>> >>> > Hi Santish, >>> >>> > >>> >>> > In my knowlege, Cisco has ml2 driver for Nexus only. >>> >>> > >>> >>> > So, if you have requirements for dynamic L3 provisioning / >>> >>> > configuration, >>> >>> > it's better to go with SDN solution. >>> >>> > >>> >>> > On Jan 31, 2018 11:39 PM, "Satish Patel" >>> >>> > wrote: >>> >>> >> >>> >>> >> So no one using ASR 1001 for Openstack? >>> >>> >> >>> >>> >> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel >>> >>> >> >>> >>> >> wrote: >>> >>> >> > Folks, >>> >>> >> > >>> >>> >> > We are planning to deploy production style private cloud and >>> >>> >> > gathering >>> >>> >> > information about what we should use and why and i came across >>> >>> >> > with >>> >>> >> > couple of document related network node criticality and >>> >>> >> > performance >>> >>> >> > issue and many folks suggesting following >>> >>> >> > >>> >>> >> > 1. DVR (it seem complicated after reading, also need lots of >>> >>> >> > public >>> >>> >> > IP) >>> >>> >> > 2. Use ASR1k centralized router to use for L3 function (any >>> >>> >> > idea >>> >>> >> > what >>> >>> >> > model should be good? or do we need any licensing to integrate >>> >>> >> > with >>> >>> >> > openstack?) >>> >>> >> > >>> >>> >> > Would like to get some input from folks who already using >>> >>> >> > openstack >>> >>> >> > in >>> >>> >> > production and would like to know what kind of deployment they >>> >>> >> > pick >>> >>> >> > for network/neutron performance? 
>>> >>> >> >>> >>> >> _______________________________________________ >>> >>> >> Mailing list: >>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >>> >> Post to : openstack at lists.openstack.org >>> >>> >> Unsubscribe : >>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >> >>> >> >>> > >> >> > > From fawaz.moh.ibraheem at gmail.com Thu Feb 1 16:38:06 2018 From: fawaz.moh.ibraheem at gmail.com (Fawaz Mohammed) Date: Thu, 1 Feb 2018 20:38:06 +0400 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: In case your undercloud is dead, there will no be any impact in your overcloud, but you will not be able to perform day 2 operations on your tripleO environment, such as scale-out / scale-in the infrastructure, update / upgrade your overcloud. For that, it's recommended to backup the undercloud at least after each operation in it. --- Regards, Fawaz Mohammed On Feb 1, 2018 8:23 PM, "Satish Patel" wrote: You have a point there! Got it, One more question what will happened if deploy my cloud and second day my undercloud (TripleO) node is dead? is it going to impact my overcloud production? because all recipe cooked inside undercloud and now its no more. Do you think i need to keep backup of undercloud? On Thu, Feb 1, 2018 at 11:21 AM, Fawaz Mohammed wrote: > On TripleO deployment, it's a challenge to use VMs for any overclode role > (controller, compute or storage). As undercloud uses BMC / IPMI to control > the power if bare metal during the deployment. > > --- > Regards, > Fawaz Mohammed > > > On Feb 1, 2018 8:07 PM, "Satish Patel" wrote: > > Thanks for info i will keep in mind, Can we use VM for controller > node? or 1 physical and 2 VM controller node, just a thought > > On Thu, Feb 1, 2018 at 10:45 AM, Fawaz Mohammed > wrote: >> For TripleO deployment, you'll need: >> A separate node or VM as UnderCloud. >> 3 control nodes for HA, 2 controllers can't form HA by default, this is to >> avoid split-brain scenarios. >> For compute nodes, I recommend to have 2 x 10G ports, one for O&M and the >> other one for tenants. And a separate 1G for BMC / IPMI. >> >> --- >> Regards, >> Fawaz Mohammed >> >> >> On Thu, Feb 1, 2018 at 7:32 PM, Satish Patel wrote: >>> >>> We are CentOS7 shop that is why i have started playing with TripleO on >>> my VMware environment to get familiar. what do you suggest for 20 node >>> cluster? we don't have any storage requirement all we need CPU and >>> Memory to run our application, currently all application running on >>> VMware and i want to migrate them over Openstack and do some rapid >>> deployment. >>> >>> This is what i am planning (let me if i am wrong) >>> >>> 2 controller node (in HA) >>> 2 Cisco ASR1k (L3 function in HA) >>> 20 compute node with 10G nic and lots of memory and many cpu cores. >>> >>> Anything else which i am missing? >>> >>> On Thu, Feb 1, 2018 at 10:15 AM, Fawaz Mohammed >>> wrote: >>> > Hi Satish, >>> > >>> > TripleO (Supported on CentOS and RHEL) and Juju (Supported on Ubuntu) >>> > [1] >>> > are the most used OpenStack deployment tools. >>> > >>> > From the deployment perspectives, you need to know your cloud setup and >>> > the >>> > required plugins to decide which deployment you will choose. As each >>> > tool >>> > has a pre-integrated plugging out of the box. 
>>> > >>> > >>> > [1] >>> > >>> > >>> > https://docs.openstack.org/charm-deployment-guide/latest/ install-juju.html >>> > --- >>> > Regards, >>> > Fawaz Mohammed >>> > >>> > On Thu, Feb 1, 2018 at 5:04 PM, Satish Patel >>> > wrote: >>> >> >>> >> Hi Fawaz, >>> >> >>> >> Great so you think I'm doing right thing to use ASR1k for my L3 >>> >> function? >>> >> >>> >> I'm using DVR on my lab with 3 node cluster and it's kinda complicated >>> >> to >>> >> troubleshoot sometime. >>> >> >>> >> We are planning to build 20 node cluster for start and started playing >>> >> with tripleo to deploy and having lots of issue with tripleo too :( I >>> >> was >>> >> also thinking to use mirantis but they stopped development. >>> >> >>> >> Based on your experience what most of folks use to deploy middle size >>> >> cloud? What about you? >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> Sent from my iPhone >>> >> >>> >> On Jan 31, 2018, at 11:48 PM, Fawaz Mohammed >>> >> wrote: >>> >> >>> >> http://www.jimmdenton.com/networking-cisco-asr-install/ can beet your >>> >> requirements. I have another plugin in production from different >>> >> vendor >>> >> for >>> >> the same purpose. and it works perfect. >>> >> >>> >> Regarding your question about the license, usually, there is no >>> >> license >>> >> for such plugins. >>> >> >>> >> I've no production experience with DVR, and I don't recommend it in >>> >> medium >>> >> to large environment. >>> >> >>> >> On Thu, Feb 1, 2018 at 2:58 AM, Satish Patel >>> >> wrote: >>> >>> >>> >>> What about this? >>> >>> http://www.jimmdenton.com/networking-cisco-asr-part-two/ >>> >>> >>> >>> ML2 does use ASR too, >>> >>> >>> >>> Just curious what people mostly use in production? are they use DVR >>> >>> or >>> >>> some kind of hardware for L3? >>> >>> >>> >>> >>> >>> On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed >>> >>> wrote: >>> >>> > Hi Santish, >>> >>> > >>> >>> > In my knowlege, Cisco has ml2 driver for Nexus only. >>> >>> > >>> >>> > So, if you have requirements for dynamic L3 provisioning / >>> >>> > configuration, >>> >>> > it's better to go with SDN solution. >>> >>> > >>> >>> > On Jan 31, 2018 11:39 PM, "Satish Patel" >>> >>> > wrote: >>> >>> >> >>> >>> >> So no one using ASR 1001 for Openstack? >>> >>> >> >>> >>> >> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel >>> >>> >> >>> >>> >> wrote: >>> >>> >> > Folks, >>> >>> >> > >>> >>> >> > We are planning to deploy production style private cloud and >>> >>> >> > gathering >>> >>> >> > information about what we should use and why and i came across >>> >>> >> > with >>> >>> >> > couple of document related network node criticality and >>> >>> >> > performance >>> >>> >> > issue and many folks suggesting following >>> >>> >> > >>> >>> >> > 1. DVR (it seem complicated after reading, also need lots of >>> >>> >> > public >>> >>> >> > IP) >>> >>> >> > 2. Use ASR1k centralized router to use for L3 function (any >>> >>> >> > idea >>> >>> >> > what >>> >>> >> > model should be good? or do we need any licensing to integrate >>> >>> >> > with >>> >>> >> > openstack?) >>> >>> >> > >>> >>> >> > Would like to get some input from folks who already using >>> >>> >> > openstack >>> >>> >> > in >>> >>> >> > production and would like to know what kind of deployment they >>> >>> >> > pick >>> >>> >> > for network/neutron performance? 
>>> >>> >> >>> >>> >> _______________________________________________ >>> >>> >> Mailing list: >>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >>> >> Post to : openstack at lists.openstack.org >>> >>> >> Unsubscribe : >>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >> >>> >> >>> > >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Feb 1 16:49:37 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 1 Feb 2018 11:49:37 -0500 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: Is it ok if i create undercloud on VMware because its easier to take snapshot and clone etc. ~S On Thu, Feb 1, 2018 at 11:38 AM, Fawaz Mohammed wrote: > In case your undercloud is dead, there will no be any impact in your > overcloud, but you will not be able to perform day 2 operations on your > tripleO environment, such as scale-out / scale-in the infrastructure, update > / upgrade your overcloud. > For that, it's recommended to backup the undercloud at least after each > operation in it. > > --- > Regards, > Fawaz Mohammed > > > On Feb 1, 2018 8:23 PM, "Satish Patel" wrote: > > You have a point there! Got it, > > One more question what will happened if deploy my cloud and second day > my undercloud (TripleO) node is dead? is it going to impact my > overcloud production? because all recipe cooked inside undercloud and > now its no more. > > Do you think i need to keep backup of undercloud? > > On Thu, Feb 1, 2018 at 11:21 AM, Fawaz Mohammed > wrote: >> On TripleO deployment, it's a challenge to use VMs for any overclode role >> (controller, compute or storage). As undercloud uses BMC / IPMI to control >> the power if bare metal during the deployment. >> >> --- >> Regards, >> Fawaz Mohammed >> >> >> On Feb 1, 2018 8:07 PM, "Satish Patel" wrote: >> >> Thanks for info i will keep in mind, Can we use VM for controller >> node? or 1 physical and 2 VM controller node, just a thought >> >> On Thu, Feb 1, 2018 at 10:45 AM, Fawaz Mohammed >> wrote: >>> For TripleO deployment, you'll need: >>> A separate node or VM as UnderCloud. >>> 3 control nodes for HA, 2 controllers can't form HA by default, this is >>> to >>> avoid split-brain scenarios. >>> For compute nodes, I recommend to have 2 x 10G ports, one for O&M and the >>> other one for tenants. And a separate 1G for BMC / IPMI. >>> >>> --- >>> Regards, >>> Fawaz Mohammed >>> >>> >>> On Thu, Feb 1, 2018 at 7:32 PM, Satish Patel >>> wrote: >>>> >>>> We are CentOS7 shop that is why i have started playing with TripleO on >>>> my VMware environment to get familiar. what do you suggest for 20 node >>>> cluster? we don't have any storage requirement all we need CPU and >>>> Memory to run our application, currently all application running on >>>> VMware and i want to migrate them over Openstack and do some rapid >>>> deployment. >>>> >>>> This is what i am planning (let me if i am wrong) >>>> >>>> 2 controller node (in HA) >>>> 2 Cisco ASR1k (L3 function in HA) >>>> 20 compute node with 10G nic and lots of memory and many cpu cores. >>>> >>>> Anything else which i am missing? >>>> >>>> On Thu, Feb 1, 2018 at 10:15 AM, Fawaz Mohammed >>>> wrote: >>>> > Hi Satish, >>>> > >>>> > TripleO (Supported on CentOS and RHEL) and Juju (Supported on Ubuntu) >>>> > [1] >>>> > are the most used OpenStack deployment tools. 
>>>> > >>>> > From the deployment perspectives, you need to know your cloud setup >>>> > and >>>> > the >>>> > required plugins to decide which deployment you will choose. As each >>>> > tool >>>> > has a pre-integrated plugging out of the box. >>>> > >>>> > >>>> > [1] >>>> > >>>> > >>>> > >>>> > https://docs.openstack.org/charm-deployment-guide/latest/install-juju.html >>>> > --- >>>> > Regards, >>>> > Fawaz Mohammed >>>> > >>>> > On Thu, Feb 1, 2018 at 5:04 PM, Satish Patel >>>> > wrote: >>>> >> >>>> >> Hi Fawaz, >>>> >> >>>> >> Great so you think I'm doing right thing to use ASR1k for my L3 >>>> >> function? >>>> >> >>>> >> I'm using DVR on my lab with 3 node cluster and it's kinda >>>> >> complicated >>>> >> to >>>> >> troubleshoot sometime. >>>> >> >>>> >> We are planning to build 20 node cluster for start and started >>>> >> playing >>>> >> with tripleo to deploy and having lots of issue with tripleo too :( >>>> >> I >>>> >> was >>>> >> also thinking to use mirantis but they stopped development. >>>> >> >>>> >> Based on your experience what most of folks use to deploy middle size >>>> >> cloud? What about you? >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> Sent from my iPhone >>>> >> >>>> >> On Jan 31, 2018, at 11:48 PM, Fawaz Mohammed >>>> >> wrote: >>>> >> >>>> >> http://www.jimmdenton.com/networking-cisco-asr-install/ can beet your >>>> >> requirements. I have another plugin in production from different >>>> >> vendor >>>> >> for >>>> >> the same purpose. and it works perfect. >>>> >> >>>> >> Regarding your question about the license, usually, there is no >>>> >> license >>>> >> for such plugins. >>>> >> >>>> >> I've no production experience with DVR, and I don't recommend it in >>>> >> medium >>>> >> to large environment. >>>> >> >>>> >> On Thu, Feb 1, 2018 at 2:58 AM, Satish Patel >>>> >> wrote: >>>> >>> >>>> >>> What about this? >>>> >>> http://www.jimmdenton.com/networking-cisco-asr-part-two/ >>>> >>> >>>> >>> ML2 does use ASR too, >>>> >>> >>>> >>> Just curious what people mostly use in production? are they use DVR >>>> >>> or >>>> >>> some kind of hardware for L3? >>>> >>> >>>> >>> >>>> >>> On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed >>>> >>> wrote: >>>> >>> > Hi Santish, >>>> >>> > >>>> >>> > In my knowlege, Cisco has ml2 driver for Nexus only. >>>> >>> > >>>> >>> > So, if you have requirements for dynamic L3 provisioning / >>>> >>> > configuration, >>>> >>> > it's better to go with SDN solution. >>>> >>> > >>>> >>> > On Jan 31, 2018 11:39 PM, "Satish Patel" >>>> >>> > wrote: >>>> >>> >> >>>> >>> >> So no one using ASR 1001 for Openstack? >>>> >>> >> >>>> >>> >> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel >>>> >>> >> >>>> >>> >> wrote: >>>> >>> >> > Folks, >>>> >>> >> > >>>> >>> >> > We are planning to deploy production style private cloud and >>>> >>> >> > gathering >>>> >>> >> > information about what we should use and why and i came across >>>> >>> >> > with >>>> >>> >> > couple of document related network node criticality and >>>> >>> >> > performance >>>> >>> >> > issue and many folks suggesting following >>>> >>> >> > >>>> >>> >> > 1. DVR (it seem complicated after reading, also need lots of >>>> >>> >> > public >>>> >>> >> > IP) >>>> >>> >> > 2. Use ASR1k centralized router to use for L3 function (any >>>> >>> >> > idea >>>> >>> >> > what >>>> >>> >> > model should be good? or do we need any licensing to integrate >>>> >>> >> > with >>>> >>> >> > openstack?) 
>>>> >>> >> > >>>> >>> >> > Would like to get some input from folks who already using >>>> >>> >> > openstack >>>> >>> >> > in >>>> >>> >> > production and would like to know what kind of deployment they >>>> >>> >> > pick >>>> >>> >> > for network/neutron performance? >>>> >>> >> >>>> >>> >> _______________________________________________ >>>> >>> >> Mailing list: >>>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>> >>> >> Post to : openstack at lists.openstack.org >>>> >>> >> Unsubscribe : >>>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>> >> >>>> >> >>>> > >>> >>> >> >> > > From fawaz.moh.ibraheem at gmail.com Thu Feb 1 16:54:32 2018 From: fawaz.moh.ibraheem at gmail.com (Fawaz Mohammed) Date: Thu, 1 Feb 2018 20:54:32 +0400 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: Yes, you can do that. On Feb 1, 2018 8:49 PM, "Satish Patel" wrote: > Is it ok if i create undercloud on VMware because its easier to take > snapshot and clone etc. > > ~S > > On Thu, Feb 1, 2018 at 11:38 AM, Fawaz Mohammed > wrote: > > In case your undercloud is dead, there will no be any impact in your > > overcloud, but you will not be able to perform day 2 operations on your > > tripleO environment, such as scale-out / scale-in the infrastructure, > update > > / upgrade your overcloud. > > For that, it's recommended to backup the undercloud at least after each > > operation in it. > > > > --- > > Regards, > > Fawaz Mohammed > > > > > > On Feb 1, 2018 8:23 PM, "Satish Patel" wrote: > > > > You have a point there! Got it, > > > > One more question what will happened if deploy my cloud and second day > > my undercloud (TripleO) node is dead? is it going to impact my > > overcloud production? because all recipe cooked inside undercloud and > > now its no more. > > > > Do you think i need to keep backup of undercloud? > > > > On Thu, Feb 1, 2018 at 11:21 AM, Fawaz Mohammed > > wrote: > >> On TripleO deployment, it's a challenge to use VMs for any overclode > role > >> (controller, compute or storage). As undercloud uses BMC / IPMI to > control > >> the power if bare metal during the deployment. > >> > >> --- > >> Regards, > >> Fawaz Mohammed > >> > >> > >> On Feb 1, 2018 8:07 PM, "Satish Patel" wrote: > >> > >> Thanks for info i will keep in mind, Can we use VM for controller > >> node? or 1 physical and 2 VM controller node, just a thought > >> > >> On Thu, Feb 1, 2018 at 10:45 AM, Fawaz Mohammed > >> wrote: > >>> For TripleO deployment, you'll need: > >>> A separate node or VM as UnderCloud. > >>> 3 control nodes for HA, 2 controllers can't form HA by default, this is > >>> to > >>> avoid split-brain scenarios. > >>> For compute nodes, I recommend to have 2 x 10G ports, one for O&M and > the > >>> other one for tenants. And a separate 1G for BMC / IPMI. > >>> > >>> --- > >>> Regards, > >>> Fawaz Mohammed > >>> > >>> > >>> On Thu, Feb 1, 2018 at 7:32 PM, Satish Patel > >>> wrote: > >>>> > >>>> We are CentOS7 shop that is why i have started playing with TripleO on > >>>> my VMware environment to get familiar. what do you suggest for 20 node > >>>> cluster? we don't have any storage requirement all we need CPU and > >>>> Memory to run our application, currently all application running on > >>>> VMware and i want to migrate them over Openstack and do some rapid > >>>> deployment. 
> >>>> > >>>> This is what i am planning (let me if i am wrong) > >>>> > >>>> 2 controller node (in HA) > >>>> 2 Cisco ASR1k (L3 function in HA) > >>>> 20 compute node with 10G nic and lots of memory and many cpu cores. > >>>> > >>>> Anything else which i am missing? > >>>> > >>>> On Thu, Feb 1, 2018 at 10:15 AM, Fawaz Mohammed > >>>> wrote: > >>>> > Hi Satish, > >>>> > > >>>> > TripleO (Supported on CentOS and RHEL) and Juju (Supported on > Ubuntu) > >>>> > [1] > >>>> > are the most used OpenStack deployment tools. > >>>> > > >>>> > From the deployment perspectives, you need to know your cloud setup > >>>> > and > >>>> > the > >>>> > required plugins to decide which deployment you will choose. As each > >>>> > tool > >>>> > has a pre-integrated plugging out of the box. > >>>> > > >>>> > > >>>> > [1] > >>>> > > >>>> > > >>>> > > >>>> > https://docs.openstack.org/charm-deployment-guide/latest/ > install-juju.html > >>>> > --- > >>>> > Regards, > >>>> > Fawaz Mohammed > >>>> > > >>>> > On Thu, Feb 1, 2018 at 5:04 PM, Satish Patel > >>>> > wrote: > >>>> >> > >>>> >> Hi Fawaz, > >>>> >> > >>>> >> Great so you think I'm doing right thing to use ASR1k for my L3 > >>>> >> function? > >>>> >> > >>>> >> I'm using DVR on my lab with 3 node cluster and it's kinda > >>>> >> complicated > >>>> >> to > >>>> >> troubleshoot sometime. > >>>> >> > >>>> >> We are planning to build 20 node cluster for start and started > >>>> >> playing > >>>> >> with tripleo to deploy and having lots of issue with tripleo too :( > >>>> >> I > >>>> >> was > >>>> >> also thinking to use mirantis but they stopped development. > >>>> >> > >>>> >> Based on your experience what most of folks use to deploy middle > size > >>>> >> cloud? What about you? > >>>> >> > >>>> >> > >>>> >> > >>>> >> > >>>> >> > >>>> >> > >>>> >> > >>>> >> Sent from my iPhone > >>>> >> > >>>> >> On Jan 31, 2018, at 11:48 PM, Fawaz Mohammed > >>>> >> wrote: > >>>> >> > >>>> >> http://www.jimmdenton.com/networking-cisco-asr-install/ can beet > your > >>>> >> requirements. I have another plugin in production from different > >>>> >> vendor > >>>> >> for > >>>> >> the same purpose. and it works perfect. > >>>> >> > >>>> >> Regarding your question about the license, usually, there is no > >>>> >> license > >>>> >> for such plugins. > >>>> >> > >>>> >> I've no production experience with DVR, and I don't recommend it in > >>>> >> medium > >>>> >> to large environment. > >>>> >> > >>>> >> On Thu, Feb 1, 2018 at 2:58 AM, Satish Patel > > >>>> >> wrote: > >>>> >>> > >>>> >>> What about this? > >>>> >>> http://www.jimmdenton.com/networking-cisco-asr-part-two/ > >>>> >>> > >>>> >>> ML2 does use ASR too, > >>>> >>> > >>>> >>> Just curious what people mostly use in production? are they use > DVR > >>>> >>> or > >>>> >>> some kind of hardware for L3? > >>>> >>> > >>>> >>> > >>>> >>> On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed > >>>> >>> wrote: > >>>> >>> > Hi Santish, > >>>> >>> > > >>>> >>> > In my knowlege, Cisco has ml2 driver for Nexus only. > >>>> >>> > > >>>> >>> > So, if you have requirements for dynamic L3 provisioning / > >>>> >>> > configuration, > >>>> >>> > it's better to go with SDN solution. > >>>> >>> > > >>>> >>> > On Jan 31, 2018 11:39 PM, "Satish Patel" > >>>> >>> > wrote: > >>>> >>> >> > >>>> >>> >> So no one using ASR 1001 for Openstack? 
> >>>> >>> >> > >>>> >>> >> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel > >>>> >>> >> > >>>> >>> >> wrote: > >>>> >>> >> > Folks, > >>>> >>> >> > > >>>> >>> >> > We are planning to deploy production style private cloud and > >>>> >>> >> > gathering > >>>> >>> >> > information about what we should use and why and i came > across > >>>> >>> >> > with > >>>> >>> >> > couple of document related network node criticality and > >>>> >>> >> > performance > >>>> >>> >> > issue and many folks suggesting following > >>>> >>> >> > > >>>> >>> >> > 1. DVR (it seem complicated after reading, also need lots of > >>>> >>> >> > public > >>>> >>> >> > IP) > >>>> >>> >> > 2. Use ASR1k centralized router to use for L3 function (any > >>>> >>> >> > idea > >>>> >>> >> > what > >>>> >>> >> > model should be good? or do we need any licensing to > integrate > >>>> >>> >> > with > >>>> >>> >> > openstack?) > >>>> >>> >> > > >>>> >>> >> > Would like to get some input from folks who already using > >>>> >>> >> > openstack > >>>> >>> >> > in > >>>> >>> >> > production and would like to know what kind of deployment > they > >>>> >>> >> > pick > >>>> >>> >> > for network/neutron performance? > >>>> >>> >> > >>>> >>> >> _______________________________________________ > >>>> >>> >> Mailing list: > >>>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >>>> >>> >> Post to : openstack at lists.openstack.org > >>>> >>> >> Unsubscribe : > >>>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >>>> >> > >>>> >> > >>>> > > >>> > >>> > >> > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Feb 1 17:08:05 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 1 Feb 2018 12:08:05 -0500 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: Thanks! any specific version of TripleO i should use or openstack version for production? On Thu, Feb 1, 2018 at 11:54 AM, Fawaz Mohammed wrote: > Yes, you can do that. > > On Feb 1, 2018 8:49 PM, "Satish Patel" wrote: >> >> Is it ok if i create undercloud on VMware because its easier to take >> snapshot and clone etc. >> >> ~S >> >> On Thu, Feb 1, 2018 at 11:38 AM, Fawaz Mohammed >> wrote: >> > In case your undercloud is dead, there will no be any impact in your >> > overcloud, but you will not be able to perform day 2 operations on your >> > tripleO environment, such as scale-out / scale-in the infrastructure, >> > update >> > / upgrade your overcloud. >> > For that, it's recommended to backup the undercloud at least after each >> > operation in it. >> > >> > --- >> > Regards, >> > Fawaz Mohammed >> > >> > >> > On Feb 1, 2018 8:23 PM, "Satish Patel" wrote: >> > >> > You have a point there! Got it, >> > >> > One more question what will happened if deploy my cloud and second day >> > my undercloud (TripleO) node is dead? is it going to impact my >> > overcloud production? because all recipe cooked inside undercloud and >> > now its no more. >> > >> > Do you think i need to keep backup of undercloud? >> > >> > On Thu, Feb 1, 2018 at 11:21 AM, Fawaz Mohammed >> > wrote: >> >> On TripleO deployment, it's a challenge to use VMs for any overclode >> >> role >> >> (controller, compute or storage). As undercloud uses BMC / IPMI to >> >> control >> >> the power if bare metal during the deployment. 
>> >> >> >> --- >> >> Regards, >> >> Fawaz Mohammed >> >> >> >> >> >> On Feb 1, 2018 8:07 PM, "Satish Patel" wrote: >> >> >> >> Thanks for info i will keep in mind, Can we use VM for controller >> >> node? or 1 physical and 2 VM controller node, just a thought >> >> >> >> On Thu, Feb 1, 2018 at 10:45 AM, Fawaz Mohammed >> >> wrote: >> >>> For TripleO deployment, you'll need: >> >>> A separate node or VM as UnderCloud. >> >>> 3 control nodes for HA, 2 controllers can't form HA by default, this >> >>> is >> >>> to >> >>> avoid split-brain scenarios. >> >>> For compute nodes, I recommend to have 2 x 10G ports, one for O&M and >> >>> the >> >>> other one for tenants. And a separate 1G for BMC / IPMI. >> >>> >> >>> --- >> >>> Regards, >> >>> Fawaz Mohammed >> >>> >> >>> >> >>> On Thu, Feb 1, 2018 at 7:32 PM, Satish Patel >> >>> wrote: >> >>>> >> >>>> We are CentOS7 shop that is why i have started playing with TripleO >> >>>> on >> >>>> my VMware environment to get familiar. what do you suggest for 20 >> >>>> node >> >>>> cluster? we don't have any storage requirement all we need CPU and >> >>>> Memory to run our application, currently all application running on >> >>>> VMware and i want to migrate them over Openstack and do some rapid >> >>>> deployment. >> >>>> >> >>>> This is what i am planning (let me if i am wrong) >> >>>> >> >>>> 2 controller node (in HA) >> >>>> 2 Cisco ASR1k (L3 function in HA) >> >>>> 20 compute node with 10G nic and lots of memory and many cpu cores. >> >>>> >> >>>> Anything else which i am missing? >> >>>> >> >>>> On Thu, Feb 1, 2018 at 10:15 AM, Fawaz Mohammed >> >>>> wrote: >> >>>> > Hi Satish, >> >>>> > >> >>>> > TripleO (Supported on CentOS and RHEL) and Juju (Supported on >> >>>> > Ubuntu) >> >>>> > [1] >> >>>> > are the most used OpenStack deployment tools. >> >>>> > >> >>>> > From the deployment perspectives, you need to know your cloud setup >> >>>> > and >> >>>> > the >> >>>> > required plugins to decide which deployment you will choose. As >> >>>> > each >> >>>> > tool >> >>>> > has a pre-integrated plugging out of the box. >> >>>> > >> >>>> > >> >>>> > [1] >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > https://docs.openstack.org/charm-deployment-guide/latest/install-juju.html >> >>>> > --- >> >>>> > Regards, >> >>>> > Fawaz Mohammed >> >>>> > >> >>>> > On Thu, Feb 1, 2018 at 5:04 PM, Satish Patel >> >>>> > wrote: >> >>>> >> >> >>>> >> Hi Fawaz, >> >>>> >> >> >>>> >> Great so you think I'm doing right thing to use ASR1k for my L3 >> >>>> >> function? >> >>>> >> >> >>>> >> I'm using DVR on my lab with 3 node cluster and it's kinda >> >>>> >> complicated >> >>>> >> to >> >>>> >> troubleshoot sometime. >> >>>> >> >> >>>> >> We are planning to build 20 node cluster for start and started >> >>>> >> playing >> >>>> >> with tripleo to deploy and having lots of issue with tripleo too >> >>>> >> :( >> >>>> >> I >> >>>> >> was >> >>>> >> also thinking to use mirantis but they stopped development. >> >>>> >> >> >>>> >> Based on your experience what most of folks use to deploy middle >> >>>> >> size >> >>>> >> cloud? What about you? >> >>>> >> >> >>>> >> >> >>>> >> >> >>>> >> >> >>>> >> >> >>>> >> >> >>>> >> >> >>>> >> Sent from my iPhone >> >>>> >> >> >>>> >> On Jan 31, 2018, at 11:48 PM, Fawaz Mohammed >> >>>> >> wrote: >> >>>> >> >> >>>> >> http://www.jimmdenton.com/networking-cisco-asr-install/ can beet >> >>>> >> your >> >>>> >> requirements. I have another plugin in production from different >> >>>> >> vendor >> >>>> >> for >> >>>> >> the same purpose. 
and it works perfect. >> >>>> >> >> >>>> >> Regarding your question about the license, usually, there is no >> >>>> >> license >> >>>> >> for such plugins. >> >>>> >> >> >>>> >> I've no production experience with DVR, and I don't recommend it >> >>>> >> in >> >>>> >> medium >> >>>> >> to large environment. >> >>>> >> >> >>>> >> On Thu, Feb 1, 2018 at 2:58 AM, Satish Patel >> >>>> >> >> >>>> >> wrote: >> >>>> >>> >> >>>> >>> What about this? >> >>>> >>> http://www.jimmdenton.com/networking-cisco-asr-part-two/ >> >>>> >>> >> >>>> >>> ML2 does use ASR too, >> >>>> >>> >> >>>> >>> Just curious what people mostly use in production? are they use >> >>>> >>> DVR >> >>>> >>> or >> >>>> >>> some kind of hardware for L3? >> >>>> >>> >> >>>> >>> >> >>>> >>> On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed >> >>>> >>> wrote: >> >>>> >>> > Hi Santish, >> >>>> >>> > >> >>>> >>> > In my knowlege, Cisco has ml2 driver for Nexus only. >> >>>> >>> > >> >>>> >>> > So, if you have requirements for dynamic L3 provisioning / >> >>>> >>> > configuration, >> >>>> >>> > it's better to go with SDN solution. >> >>>> >>> > >> >>>> >>> > On Jan 31, 2018 11:39 PM, "Satish Patel" >> >>>> >>> > wrote: >> >>>> >>> >> >> >>>> >>> >> So no one using ASR 1001 for Openstack? >> >>>> >>> >> >> >>>> >>> >> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel >> >>>> >>> >> >> >>>> >>> >> wrote: >> >>>> >>> >> > Folks, >> >>>> >>> >> > >> >>>> >>> >> > We are planning to deploy production style private cloud and >> >>>> >>> >> > gathering >> >>>> >>> >> > information about what we should use and why and i came >> >>>> >>> >> > across >> >>>> >>> >> > with >> >>>> >>> >> > couple of document related network node criticality and >> >>>> >>> >> > performance >> >>>> >>> >> > issue and many folks suggesting following >> >>>> >>> >> > >> >>>> >>> >> > 1. DVR (it seem complicated after reading, also need lots of >> >>>> >>> >> > public >> >>>> >>> >> > IP) >> >>>> >>> >> > 2. Use ASR1k centralized router to use for L3 function (any >> >>>> >>> >> > idea >> >>>> >>> >> > what >> >>>> >>> >> > model should be good? or do we need any licensing to >> >>>> >>> >> > integrate >> >>>> >>> >> > with >> >>>> >>> >> > openstack?) >> >>>> >>> >> > >> >>>> >>> >> > Would like to get some input from folks who already using >> >>>> >>> >> > openstack >> >>>> >>> >> > in >> >>>> >>> >> > production and would like to know what kind of deployment >> >>>> >>> >> > they >> >>>> >>> >> > pick >> >>>> >>> >> > for network/neutron performance? >> >>>> >>> >> >> >>>> >>> >> _______________________________________________ >> >>>> >>> >> Mailing list: >> >>>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >>>> >>> >> Post to : openstack at lists.openstack.org >> >>>> >>> >> Unsubscribe : >> >>>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >>>> >> >> >>>> >> >> >>>> > >> >>> >> >>> >> >> >> >> >> > >> > From fungi at yuggoth.org Thu Feb 1 19:21:03 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Feb 2018 19:21:03 +0000 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: Message-ID: <20180201192102.6tl6rtcuh2zzpji5@yuggoth.org> On 2018-02-01 19:15:51 +0400 (+0400), Fawaz Mohammed wrote: > TripleO (Supported on CentOS and RHEL) and Juju (Supported on > Ubuntu) [1] are the most used OpenStack deployment tools. [...] Most used in what sense? 
That statistic seems to contradict the results of the official OpenStack User Survey from April 2017 (page 42) at least, which claims that more deployments used Ansible (45%), Puppet (28%), Fuel (16%) and Chef (14%) than Juju (9%). TripleO didn't even have enough responses on that question to rank. https://www.openstack.org/assets/survey/April2017SurveyReport.pdf -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu Feb 1 19:32:30 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Feb 2018 19:32:30 +0000 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: <20180201192102.6tl6rtcuh2zzpji5@yuggoth.org> References: <20180201192102.6tl6rtcuh2zzpji5@yuggoth.org> Message-ID: <20180201193229.v44rebcc52clrukn@yuggoth.org> On 2018-02-01 19:21:03 +0000 (+0000), Jeremy Stanley wrote: > On 2018-02-01 19:15:51 +0400 (+0400), Fawaz Mohammed wrote: > > TripleO (Supported on CentOS and RHEL) and Juju (Supported on > > Ubuntu) [1] are the most used OpenStack deployment tools. > [...] > > Most used in what sense? That statistic seems to contradict the > results of the official OpenStack User Survey from April 2017 (page > 42) at least, which claims that more deployments used Ansible (45%), > Puppet (28%), Fuel (16%) and Chef (14%) than Juju (9%). TripleO > didn't even have enough responses on that question to rank. > > https://www.openstack.org/assets/survey/April2017SurveyReport.pdf Also, the analytics tool for the later November 2017 survey results gives basically the same ranking though the percentages changed slightly (and Fuel/Chef traded spots). https://www.openstack.org/analytics -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From satish.txt at gmail.com Thu Feb 1 19:58:46 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 1 Feb 2018 14:58:46 -0500 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: <20180201192102.6tl6rtcuh2zzpji5@yuggoth.org> References: <20180201192102.6tl6rtcuh2zzpji5@yuggoth.org> Message-ID: Interesting survey but if i am not wrong Fuel is end of life and they stopped development right? On Thu, Feb 1, 2018 at 2:21 PM, Jeremy Stanley wrote: > On 2018-02-01 19:15:51 +0400 (+0400), Fawaz Mohammed wrote: >> TripleO (Supported on CentOS and RHEL) and Juju (Supported on >> Ubuntu) [1] are the most used OpenStack deployment tools. > [...] > > Most used in what sense? That statistic seems to contradict the > results of the official OpenStack User Survey from April 2017 (page > 42) at least, which claims that more deployments used Ansible (45%), > Puppet (28%), Fuel (16%) and Chef (14%) than Juju (9%). TripleO > didn't even have enough responses on that question to rank. 
> > https://www.openstack.org/assets/survey/April2017SurveyReport.pdf > > -- > Jeremy Stanley > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From fungi at yuggoth.org Thu Feb 1 20:55:14 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Feb 2018 20:55:14 +0000 Subject: [Openstack] Openstack neutron with ASR1k In-Reply-To: References: <20180201192102.6tl6rtcuh2zzpji5@yuggoth.org> Message-ID: <20180201205513.jov3ighcyrbk4635@yuggoth.org> On 2018-02-01 14:58:46 -0500 (-0500), Satish Patel wrote: > Interesting survey but if i am not wrong Fuel is end of life and they > stopped development right? [...] Sure, but that doesn't mean people aren't still running environments deployed with it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doka.ua at gmx.com Thu Feb 1 22:49:21 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Fri, 2 Feb 2018 00:49:21 +0200 Subject: [Openstack] OpenVSwitch inside Instance no ARP passthrough In-Reply-To: <1b307fff14ee05267a5dad10216c3d04@projects.dfki.uni-kl.de> References: <19e2c014fb8d332bdb3518befce68a37@projects.dfki.uni-kl.de> <11ea9728-9446-2d8c-db3f-f5712e891af4@gmx.com> <9e663e326f138cf141d11964764388f1@projects.dfki.uni-kl.de> <7da0834a-12f7-ce79-db48-87c4058040cd@gmx.com> <1b307fff14ee05267a5dad10216c3d04@projects.dfki.uni-kl.de> Message-ID: <97e3c948-7745-ad7a-5465-2d2017189f35@gmx.com> Hi Mathias, I'm not so fluent with OVS, but I would recommend to join bridges using special "ports" like Port ovsbr1-patch Interface ovsbr1-patch type: patch options: {peer=ovsbr2-patch} and vice versa, keeping "native" configuration of "port OVSbr1" and "port OVSbr2" And keep in mind that ARP scope is broadcast domain and, if using just ARP (not routing), from VM1 you will be able to ping hosts, belonging to OVSbr1, particularly - OVSbr1's IP. On 2/1/18 4:11 PM, Mathias Strufe (DFKI) wrote: > Dear Benjamin, Volodymyr, > > good question ;) ... I like to experiment with some kind of "Firewall > NFV" ... but in the first step, I want to build a Router VM between > two networks (and later extend it with some flow rules) ... OpenStack, > in my case, is more a foundation to build a "test environment" for my > "own" application ... please find attached a quick sketch of the > current network ... > I did this already before with iptables inside the middle instance ... > worked quite well ... but know I like to achieve the same with OVS ... > I didn't expect that it is so much more difficult ;) ... > > I'm currently checking Volodymyrs answer ... I think first point is > now solved ... I "patched" now OVSbr1 and OVSbr2 inside the VM > together (see OVpatch file)... but I think this is important later > when I really like to ping from VM1 to VM2 ... but in the moment I > only ping from VM1 to the TestNFV ... but the arp requests only > reaches ens4 but not OVSbr1 (according to tcpdump)... > > May it have to do with port security and the (for OpenStack) unknown > MAC address of the OVS bridge? > > Thanks so far ... > > Mathias. > > > > > > On 2018-02-01 14:28, Benjamin Diaz wrote: >> Dear Mathias, >> >> Could you attach a diagram of your network configuration and of what >> you are trying to achieve? 
>> Are you trying to install OVS inside a VM? If so, why? >> >> Greetings, >> Benjamin >> >> On Thu, Feb 1, 2018 at 8:30 AM, Volodymyr Litovka >> wrote: >> >>> Dear Mathias, >>> >>> if I correctly understand your configuration, you're using bridges >>> inside VM and it configuration looks a bit strange: >>> >>> 1) you use two different bridges (OVSbr1/192.168.120.x and >>> OVSbr2/192.168.110.x) and there is no patch between them so they're >>> separate >>> 2) while ARP requests for address in OVSbr1 arrives from OVSbr2: >>> >>>> 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>>> >>>> but on the OVS bridge nothing arrives ... >>>> >>>> listening on OVSBR2, link-type EN10MB (Ethernet), capture size >>>> 262144 bytes >>> >>> while these bridges are separate, ARP requests and answers will not >>> be passed between them. >>> >>> Regarding your devstack configuration - unfortunately, I don't have >>> experience with devstack, so don't know, where it stores configs. In >>> Openstack, ml2_conf.ini points to openvswitch in ml2's >>> mechanism_drivers parameter, in my case it looks as the following: >>> >>> [ml2] >>> mechanism_drivers = l2population,openvswitch >>> >>> and rest of openvswitch config described in >>> /etc/neutron/plugins/ml2/openvswitch_agent.ini >>> >>> Second - I see an ambiguity in your br-tun configuration, where >>> patch_int is the same as patch-int without corresponding remote peer >>> config, probably you should check this issue. >>> >>> And third is - note that Mitaka is quite old release and probably >>> you can give a chance for the latest release of devstack? :-) >>> >>> On 1/31/18 10:49 PM, Mathias Strufe (DFKI) wrote: >>> Dear Volodymyr, all, >>> >>> thanks for your fast answer ... >>> but I'm still facing the same problem, still can't ping the >>> instance with configured and up OVS bridge ... may because I'm quite >>> new to OpenStack and OpenVswitch and didn't see the problem ;) >>> >>> My setup is devstack Mitaka in single machine config ... first of >>> all I didn't find there the openvswitch_agent.ini anymore, I >>> remember in previous version it was in the neutron/plugin folder ... >>> >>> Is this config now done in the ml2 config file in the [OVS] >>> section???? >>> >>> I'm really wondering ... >>> so I can ping between the 2 instances without any problem. But as >>> soon I bring up the OVS bridge inside the vm the ARP requests only >>> visible at the ens interface but not reaching the OVSbr ... >>> >>> please find attached two files which may help for troubleshooting. >>> One are some network information from inside the Instance that runs >>> the OVS and one ovs-vsctl info of the OpenStack Host. >>> >>> If you need more info/logs please let me know! Thanks for your >>> help! >>> >>> BR Mathias. >>> >>> On 2018-01-27 22:44, Volodymyr Litovka wrote: >>> Hi Mathias, >>> >>> whether you have all corresponding bridges and patches between >>> them >>> as described in openvswitch_agent.ini using >>> >>> integration_bridge >>> tunnel_bridge >>> int_peer_patch_port >>> tun_peer_patch_port >>> bridge_mappings >>> >>> parameters? And make sure, that service "neutron-ovs-cleanup" is >>> in >>> use during system boot. You can check these bridges and patches >>> using >>> "ovs-vsctl show" command. >>> >>> On 1/27/18 9:00 PM, Mathias Strufe (DFKI) wrote: >>> >>> Dear all, >>> >>> I'm quite new to openstack and like to install openVSwtich inside >>> one Instance of our Mitika openstack Lab Enviornment ... 
>>> But it seems that ARP packets got lost between the network >>> interface of the instance and the OVS bridge ... >>> >>> With tcpdump on the interface I see the APR packets ... >>> >>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>> >>> decode >>> listening on ens6, link-type EN10MB (Ethernet), capture size 262144 >>> >>> bytes >>> 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>> 18:50:58.125009 ARP, Request who-has 192.168.120.1 tell >>> 192.168.120.6, length 28 >>> 18:50:59.077315 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>> 18:50:59.121369 ARP, Request who-has 192.168.120.1 tell >>> 192.168.120.6, length 28 >>> 18:51:00.077327 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>> 18:51:00.121343 ARP, Request who-has 192.168.120.1 tell >>> 192.168.120.6, length 28 >>> >>> but on the OVS bridge nothing arrives ... >>> >>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>> >>> decode >>> listening on OVSbr2, link-type EN10MB (Ethernet), capture size >>> 262144 bytes >>> >>> I disabled port_security and removed the security group but nothing >>> >>> changed >>> >>> >> +-----------------------+---------------------------------------------------------------------------------------+ >> >>> >>> >>> | Field | Value >>> | >>> >>> >> +-----------------------+---------------------------------------------------------------------------------------+ >> >>> >>> >>> | admin_state_up | True >>> | >>> | allowed_address_pairs | >>> | >>> | binding:host_id | node11 >>> | >>> | binding:profile | {} >>> | >>> | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": >>> true} | >>> | binding:vif_type | ovs >>> | >>> | binding:vnic_type | normal >>> | >>> | created_at | 2018-01-27T16:45:48Z >>> | >>> | description | >>> | >>> | device_id | 74916967-984c-4617-ae33-b847de73de13 >>> | >>> | device_owner | compute:nova >>> | >>> | extra_dhcp_opts | >>> | >>> | fixed_ips | {"subnet_id": >>> "525db7ff-2bf2-4c64-b41e-1e41570ec358", "ip_address": >>> "192.168.120.10"} | >>> | id | 74b754d6-0000-4c2e-bfd1-87f640154ac9 >>> | >>> | mac_address | fa:16:3e:af:90:0c >>> | >>> | name | >>> | >>> | network_id | 917254cb-9721-4207-99c5-8ead9f95d186 >>> | >>> | port_security_enabled | False >>> | >>> | project_id | c48457e73b664147a3d2d36d75dcd155 >>> | >>> | revision_number | 27 >>> | >>> | security_groups | >>> | >>> | status | ACTIVE >>> | >>> | tenant_id | c48457e73b664147a3d2d36d75dcd155 >>> | >>> | updated_at | 2018-01-27T18:54:24Z >>> | >>> >>> >> +-----------------------+---------------------------------------------------------------------------------------+ >> >>> >>> >>> maybe the port_filter causes still the problem? But how to disable >>> it? >>> >>> Any other idea? >>> >>> Thanks and BR Mathias. >>> >>> _______________________________________________ >>> Mailing list: >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >>> [1] >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >>> [1] >>> >>> -- >>> Volodymyr Litovka >>> "Vision without Execution is Hallucination." -- Thomas Edison >>> >>> Links: >>> ------ >>> [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> [1] >> >> -- >> Volodymyr Litovka >>  "Vision without Execution is Hallucination." 
-- Thomas Edison >> >> _______________________________________________ >>  Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >>  Post to     : openstack at lists.openstack.org >>  Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >> >> -- >> >> BENJAMÍN DÍAZ >> Cloud Computing Engineer >> >>  bdiaz at whitestack.com >> >> Links: >> ------ >> [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... URL: From mathias.strufe at dfki.de Fri Feb 2 14:14:58 2018 From: mathias.strufe at dfki.de (Mathias Strufe (DFKI)) Date: Fri, 02 Feb 2018 15:14:58 +0100 Subject: [Openstack] OpenVSwitch inside Instance no ARP passthrough In-Reply-To: <97e3c948-7745-ad7a-5465-2d2017189f35@gmx.com> References: <19e2c014fb8d332bdb3518befce68a37@projects.dfki.uni-kl.de> <11ea9728-9446-2d8c-db3f-f5712e891af4@gmx.com> <9e663e326f138cf141d11964764388f1@projects.dfki.uni-kl.de> <7da0834a-12f7-ce79-db48-87c4058040cd@gmx.com> <1b307fff14ee05267a5dad10216c3d04@projects.dfki.uni-kl.de> <97e3c948-7745-ad7a-5465-2d2017189f35@gmx.com> Message-ID: <084f35e8025cfc8cdab0e0950cf94ef7@projects.dfki.uni-kl.de> Dear Volodymyr, Benjamin, thanks a lot for your tipps and patience ... but still facing the same problem :/ So I need to bother you again ... I think its something totally stupid, basic I do wrong ... Let me summarize what I did so far: - Update OpenStack to pike (devstack All in Single VM using default local.conf) [following this https://docs.openstack.org/devstack/latest/index.html] - Set prevent_arp_spoofing = False in ml2_config.ini - Disable Port Security of the TestNFV +-----------------------+------------------------------------------------------------------------------+ | Field | Value | +-----------------------+------------------------------------------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | None | | binding_profile | None | | binding_vif_details | None | | binding_vif_type | None | | binding_vnic_type | normal | | created_at | 2018-01-31T07:50:40Z | | data_plane_status | None | | description | | | device_id | 97101c9b-c5ea-47f5-aa50-4a6ffa06c2a2 | | device_owner | compute:nova | | dns_assignment | None | | dns_name | None | | extra_dhcp_opts | | | fixed_ips | ip_address='192.168.120.5', subnet_id='b88f21e0-55ce-482f-8755-87a431f43e52' | | id | 5e97ea14-2555-44fc-bbfa-61877e93ae69 | | ip_address | None | | mac_address | fa:16:3e:55:80:84 | | name | | | network_id | 67572da9-c1e1-4330-84f6-79b64225c070 | | option_name | None | | option_value | None | | port_security_enabled | False | | project_id | ec8680e914a540e59d9d84dec8101ba5 | | qos_policy_id | None | | revision_number | 56 | | security_group_ids | | | status | ACTIVE | | subnet_id | None | | tags | | | trunk_details | None | | updated_at | 2018-02-02T13:40:26Z | +-----------------------+------------------------------------------------------------------------------+ In this state everything works fine and as expected ... I can ping from VM1 (192.168.120.10) to Test NVF VM (192.168.120.5) and get a response ... I have access to the outside world ... BUT As soon as I bring the OVS up inside of the Test NVF ... 
now as Volodymyr proposed with a "special patch port" Database config: 59aca356-8f37-4c6f-8c9a-504c66c65648 Bridge "OVSbr2" Controller "tcp:192.168.53.49:6633" is_connected: true fail_mode: secure Port "OVSbr2-patch" Interface "OVSbr2-patch" type: patch options: {peer="OVSbr1-patch"} Port "OVSbr2" Interface "OVSbr2" type: internal Port "ens5" Interface "ens5" Bridge "OVSbr1" Controller "tcp:192.168.53.49:6633" is_connected: true fail_mode: secure Port "OVSbr1" Interface "OVSbr1" type: internal Port "OVSbr1-patch" Interface "OVSbr1-patch" type: patch options: {peer="OVSbr2-patch"} Port "ens4" Interface "ens4" ovs_version: "2.5.2" the ping stops ... and again with tcpdump I can only see ARP requests on ens4 but not on the OVSbr1 bridge ... But I see now some LLDP packets on the ens4 and OVSbr1 .... Then I tried following ... I stopped the ping from source to TestNFVVM And start pinging from the TestNFV (192.168.120.5) to the Source (192.168.120.10) Again I didnt get any response ... And again looked at the tcpdump of OVSbr1 and ens4 ... tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on OVSbr1, link-type EN10MB (Ethernet), capture size 262144 bytes 13:59:18.245528 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 286, length 64 13:59:19.253513 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 287, length 64 13:59:20.261487 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 288, length 64 13:59:21.269499 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 289, length 64 13:59:21.680458 LLDP, length 110: openflow:214083694506308 13:59:22.277524 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 290, length 64 13:59:23.285531 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 291, length 64 13:59:24.293631 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 292, length 64 13:59:25.301529 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 293, length 64 13:59:26.309588 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 294, length 64 13:59:26.680238 LLDP, length 110: openflow:214083694506308 13:59:27.317591 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 295, length 64 13:59:28.325524 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 296, length 64 13:59:29.333618 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 297, length 64 13:59:30.341515 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 298, length 64 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on ens4, link-type EN10MB (Ethernet), capture size 262144 bytes 13:59:16.680452 LLDP, length 99: openflow:214083694506308 13:59:21.680791 LLDP, length 99: openflow:214083694506308 13:59:26.680532 LLDP, length 99: openflow:214083694506308 13:59:31.680503 LLDP, length 99: openflow:214083694506308 13:59:36.680681 LLDP, length 99: openflow:214083694506308 13:59:41.391777 ARP, Request who-has 192.168.120.10 tell 192.168.120.5, length 28 13:59:41.392345 ARP, Reply 192.168.120.10 is-at fa:16:3e:84:5c:29 (oui Unknown), length 28 13:59:41.680626 LLDP, length 99: openflow:214083694506308 13:59:46.680692 LLDP, length 99: openflow:214083694506308 This is a bit confusing for me ... First why does the echo request only appear on the OVSbr1 bridge and not also on the ens4 ... is this correct behaviour? 
and second why I got suddenly a ARP reply on ens4 which is indeed the correct mac of the VM1 interface ... and why the LLDP packets shown on both interfaces ... Is now something wrong with the FlowController? I use ODL with odl-l2switch-all feature enabled ... puhhh ... what do I miss??? I didn't get this ... Thx a lot Mathias. On 2018-02-01 23:49, Volodymyr Litovka wrote: > Hi Mathias, > > I'm not so fluent with OVS, but I would recommend to join bridges > using special "ports" like > > Port ovsbr1-patch > Interface ovsbr1-patch > type: patch > options: {peer=ovsbr2-patch} > and vice versa, keeping "native" configuration of "port OVSbr1" and > "port OVSbr2" > > And keep in mind that ARP scope is broadcast domain and, if using > just ARP (not routing), from VM1 you will be able to ping hosts, > belonging to OVSbr1, particularly - OVSbr1's IP. > > On 2/1/18 4:11 PM, Mathias Strufe (DFKI) wrote: > >> Dear Benjamin, Volodymyr, >> >> good question ;) ... I like to experiment with some kind of >> "Firewall NFV" ... but in the first step, I want to build a Router >> VM between two networks (and later extend it with some flow rules) >> ... OpenStack, in my case, is more a foundation to build a "test >> environment" for my "own" application ... please find attached a >> quick sketch of the current network ... >> I did this already before with iptables inside the middle instance >> ... worked quite well ... but know I like to achieve the same with >> OVS ... >> I didn't expect that it is so much more difficult ;) ... >> >> I'm currently checking Volodymyrs answer ... I think first point is >> now solved ... I "patched" now OVSbr1 and OVSbr2 inside the VM >> together (see OVpatch file)... but I think this is important later >> when I really like to ping from VM1 to VM2 ... but in the moment I >> only ping from VM1 to the TestNFV ... but the arp requests only >> reaches ens4 but not OVSbr1 (according to tcpdump)... >> >> May it have to do with port security and the (for OpenStack) >> unknown MAC address of the OVS bridge? >> >> Thanks so far ... >> >> Mathias. >> >> On 2018-02-01 14:28, Benjamin Diaz wrote: >> Dear Mathias, >> >> Could you attach a diagram of your network configuration and of >> what >> you are trying to achieve? >> Are you trying to install OVS inside a VM? If so, why? >> >> Greetings, >> Benjamin >> >> On Thu, Feb 1, 2018 at 8:30 AM, Volodymyr Litovka >> >> wrote: >> >> Dear Mathias, >> >> if I correctly understand your configuration, you're using bridges >> inside VM and it configuration looks a bit strange: >> >> 1) you use two different bridges (OVSbr1/192.168.120.x and >> OVSbr2/192.168.110.x) and there is no patch between them so they're >> >> separate >> 2) while ARP requests for address in OVSbr1 arrives from OVSbr2: >> >> 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell >> 192.168.120.6, length 28 >> >> but on the OVS bridge nothing arrives ... >> >> listening on OVSBR2, link-type EN10MB (Ethernet), capture size >> 262144 bytes >> >> while these bridges are separate, ARP requests and answers will not >> >> be passed between them. >> >> Regarding your devstack configuration - unfortunately, I don't have >> >> experience with devstack, so don't know, where it stores configs. 
>> In >> Openstack, ml2_conf.ini points to openvswitch in ml2's >> mechanism_drivers parameter, in my case it looks as the following: >> >> [ml2] >> mechanism_drivers = l2population,openvswitch >> >> and rest of openvswitch config described in >> /etc/neutron/plugins/ml2/openvswitch_agent.ini >> >> Second - I see an ambiguity in your br-tun configuration, where >> patch_int is the same as patch-int without corresponding remote >> peer >> config, probably you should check this issue. >> >> And third is - note that Mitaka is quite old release and probably >> you can give a chance for the latest release of devstack? :-) >> >> On 1/31/18 10:49 PM, Mathias Strufe (DFKI) wrote: >> Dear Volodymyr, all, >> >> thanks for your fast answer ... >> but I'm still facing the same problem, still can't ping the >> instance with configured and up OVS bridge ... may because I'm >> quite >> new to OpenStack and OpenVswitch and didn't see the problem ;) >> >> My setup is devstack Mitaka in single machine config ... first of >> all I didn't find there the openvswitch_agent.ini anymore, I >> remember in previous version it was in the neutron/plugin folder >> ... >> >> Is this config now done in the ml2 config file in the [OVS] >> section???? >> >> I'm really wondering ... >> so I can ping between the 2 instances without any problem. But as >> soon I bring up the OVS bridge inside the vm the ARP requests only >> visible at the ens interface but not reaching the OVSbr ... >> >> please find attached two files which may help for troubleshooting. >> One are some network information from inside the Instance that runs >> >> the OVS and one ovs-vsctl info of the OpenStack Host. >> >> If you need more info/logs please let me know! Thanks for your >> help! >> >> BR Mathias. >> >> On 2018-01-27 22:44, Volodymyr Litovka wrote: >> Hi Mathias, >> >> whether you have all corresponding bridges and patches between >> them >> as described in openvswitch_agent.ini using >> >> integration_bridge >> tunnel_bridge >> int_peer_patch_port >> tun_peer_patch_port >> bridge_mappings >> >> parameters? And make sure, that service "neutron-ovs-cleanup" is >> in >> use during system boot. You can check these bridges and patches >> using >> "ovs-vsctl show" command. >> >> On 1/27/18 9:00 PM, Mathias Strufe (DFKI) wrote: >> >> Dear all, >> >> I'm quite new to openstack and like to install openVSwtich inside >> one Instance of our Mitika openstack Lab Enviornment ... >> But it seems that ARP packets got lost between the network >> interface of the instance and the OVS bridge ... >> >> With tcpdump on the interface I see the APR packets ... >> >> tcpdump: verbose output suppressed, use -v or -vv for full protocol >> >> >> decode >> listening on ens6, link-type EN10MB (Ethernet), capture size 262144 >> >> >> bytes >> 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell >> 192.168.120.6, length 28 >> 18:50:58.125009 ARP, Request who-has 192.168.120.1 tell >> 192.168.120.6, length 28 >> 18:50:59.077315 ARP, Request who-has 192.168.120.10 tell >> 192.168.120.6, length 28 >> 18:50:59.121369 ARP, Request who-has 192.168.120.1 tell >> 192.168.120.6, length 28 >> 18:51:00.077327 ARP, Request who-has 192.168.120.10 tell >> 192.168.120.6, length 28 >> 18:51:00.121343 ARP, Request who-has 192.168.120.1 tell >> 192.168.120.6, length 28 >> >> but on the OVS bridge nothing arrives ... 
>> >> tcpdump: verbose output suppressed, use -v or -vv for full protocol >> >> >> decode >> listening on OVSbr2, link-type EN10MB (Ethernet), capture size >> 262144 bytes >> >> I disabled port_security and removed the security group but nothing >> >> >> changed > > +-----------------------+---------------------------------------------------------------------------------------+ > > >> | Field | Value >> | > > +-----------------------+---------------------------------------------------------------------------------------+ > > >> | admin_state_up | True >> | >> | allowed_address_pairs | >> | >> | binding:host_id | node11 >> | >> | binding:profile | {} >> | >> | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": >> true} | >> | binding:vif_type | ovs >> | >> | binding:vnic_type | normal >> | >> | created_at | 2018-01-27T16:45:48Z >> | >> | description | >> | >> | device_id | 74916967-984c-4617-ae33-b847de73de13 >> | >> | device_owner | compute:nova >> | >> | extra_dhcp_opts | >> | >> | fixed_ips | {"subnet_id": >> "525db7ff-2bf2-4c64-b41e-1e41570ec358", "ip_address": >> "192.168.120.10"} | >> | id | 74b754d6-0000-4c2e-bfd1-87f640154ac9 >> | >> | mac_address | fa:16:3e:af:90:0c >> | >> | name | >> | >> | network_id | 917254cb-9721-4207-99c5-8ead9f95d186 >> | >> | port_security_enabled | False >> | >> | project_id | c48457e73b664147a3d2d36d75dcd155 >> | >> | revision_number | 27 >> | >> | security_groups | >> | >> | status | ACTIVE >> | >> | tenant_id | c48457e73b664147a3d2d36d75dcd155 >> | >> | updated_at | 2018-01-27T18:54:24Z >> | > > +-----------------------+---------------------------------------------------------------------------------------+ > > >> maybe the port_filter causes still the problem? But how to disable >> it? >> >> Any other idea? >> >> Thanks and BR Mathias. >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >> [1] >> [1] >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >> [1] >> [1] >> >> -- >> Volodymyr Litovka >> "Vision without Execution is Hallucination." -- Thomas Edison >> >> Links: >> ------ >> [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> [1] >> [1] > > -- > Volodymyr Litovka > "Vision without Execution is Hallucination." -- Thomas Edison > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] [1] > > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] [1] > > > -- > > BENJAMÍN DÍAZ > Cloud Computing Engineer > > bdiaz at whitestack.com > > Links: > ------ > [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] > > > -- > Volodymyr Litovka > "Vision without Execution is Hallucination." -- Thomas Edison > > > Links: > ------ > [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -- Vielen Dank und Gruß Mathias. Many Thanks and kind regards, Mathias. -- Dipl.-Ing. 
(FH) Mathias Strufe Wissenschaftlicher Mitarbeiter / Researcher Intelligente Netze / Intelligent Networks Phone: +49 (0) 631 205 75 - 1826 Fax: +49 (0) 631 205 75 – 4400 E-Mail: Mathias.Strufe at dfki.de WWW: http://www.dfki.de/web/forschung/in WWW: https://selfnet-5g.eu/ -------------------------------------------------------------- Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH Trippstadter Strasse 122 D-67663 Kaiserslautern, Germany Geschaeftsfuehrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 VAT-ID: DE 148 646 973 -------------------------------------------------------------- From doka.ua at gmx.com Fri Feb 2 14:32:44 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Fri, 2 Feb 2018 16:32:44 +0200 Subject: [Openstack] OpenVSwitch inside Instance no ARP passthrough In-Reply-To: <084f35e8025cfc8cdab0e0950cf94ef7@projects.dfki.uni-kl.de> References: <19e2c014fb8d332bdb3518befce68a37@projects.dfki.uni-kl.de> <11ea9728-9446-2d8c-db3f-f5712e891af4@gmx.com> <9e663e326f138cf141d11964764388f1@projects.dfki.uni-kl.de> <7da0834a-12f7-ce79-db48-87c4058040cd@gmx.com> <1b307fff14ee05267a5dad10216c3d04@projects.dfki.uni-kl.de> <97e3c948-7745-ad7a-5465-2d2017189f35@gmx.com> <084f35e8025cfc8cdab0e0950cf94ef7@projects.dfki.uni-kl.de> Message-ID: <8f290220-b54f-97a5-5cd2-9d24600889e2@gmx.com> Hi Mathias, the fact that you've seen ARP request-reply says that connectivity itself is correct. I think the problem with flows configuration inside bridge, which is controlled by ODL. Unfortunately, I never had an experience with ODL and can't comment what it do and how. You can print flows config using command ovs-ofctl dump-flows and there you can try to find whether some rules block some traffic. On 2/2/18 4:14 PM, Mathias Strufe (DFKI) wrote: > Dear Volodymyr, Benjamin, > > thanks a lot for your tipps and patience ... but still facing the same > problem :/ > So I need to bother you again ... > I think its something totally stupid, basic I do wrong ... 
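To make the ovs-ofctl suggestion above concrete, a sketch of inspecting the flow tables on the in-VM bridges; the bridge names come from earlier in the thread, the rest is generic OVS usage and the output will differ per setup.

# list the flows the controller has (or has not) installed on each bridge
sudo ovs-ofctl dump-flows OVSbr1
sudo ovs-ofctl dump-flows OVSbr2

# with fail_mode=secure and an empty or drop-only table the bridge forwards
# nothing at all -- look for a missing priority=0 (table-miss) entry and for
# entries whose actions end in "drop"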
> > Let me summarize what I did so far: > > - Update OpenStack to pike (devstack All in Single VM using default > local.conf) >   [following this https://docs.openstack.org/devstack/latest/index.html] > - Set prevent_arp_spoofing = False in ml2_config.ini > - Disable Port Security of the TestNFV > > +-----------------------+------------------------------------------------------------------------------+ > > | Field                 | Value                               | > +-----------------------+------------------------------------------------------------------------------+ > > | admin_state_up        | UP                               | > | allowed_address_pairs |                               | > | binding_host_id       | None                               | > | binding_profile       | None                               | > | binding_vif_details   | None                               | > | binding_vif_type      | None                               | > | binding_vnic_type     | normal                               | > | created_at            | 2018-01-31T07:50:40Z >                               | > | data_plane_status     | None                               | > | description |                               | > | device_id             | 97101c9b-c5ea-47f5-aa50-4a6ffa06c2a2 >                               | > | device_owner          | compute:nova                               | > | dns_assignment        | None                               | > | dns_name              | None                               | > | extra_dhcp_opts |                               | > | fixed_ips             | ip_address='192.168.120.5', > subnet_id='b88f21e0-55ce-482f-8755-87a431f43e52' | > | id                    | 5e97ea14-2555-44fc-bbfa-61877e93ae69 >                               | > | ip_address            | None                               | > | mac_address           | fa:16:3e:55:80:84 >                               | > | name |                               | > | network_id            | 67572da9-c1e1-4330-84f6-79b64225c070 >                               | > | option_name           | None                               | > | option_value          | None                               | > | port_security_enabled | False                               | > | project_id            | ec8680e914a540e59d9d84dec8101ba5 >                               | > | qos_policy_id         | None                               | > | revision_number       | 56                               | > | security_group_ids |                               | > | status                | ACTIVE                               | > | subnet_id             | None                               | > | tags |                               | > | trunk_details         | None                               | > | updated_at            | 2018-02-02T13:40:26Z >                               | > +-----------------------+------------------------------------------------------------------------------+ > > > In this state everything works fine and as expected ... I can ping > from VM1 (192.168.120.10) to Test NVF VM (192.168.120.5) and get a > response ... I have access to the outside world ... > > BUT > > As soon as I bring the OVS up inside of the Test NVF ... 
now as > Volodymyr proposed with a "special patch port" > > Database config: > 59aca356-8f37-4c6f-8c9a-504c66c65648 >     Bridge "OVSbr2" >         Controller "tcp:192.168.53.49:6633" >             is_connected: true >         fail_mode: secure >         Port "OVSbr2-patch" >             Interface "OVSbr2-patch" >                 type: patch >                 options: {peer="OVSbr1-patch"} >         Port "OVSbr2" >             Interface "OVSbr2" >                 type: internal >         Port "ens5" >             Interface "ens5" >     Bridge "OVSbr1" >         Controller "tcp:192.168.53.49:6633" >             is_connected: true >         fail_mode: secure >         Port "OVSbr1" >             Interface "OVSbr1" >                 type: internal >         Port "OVSbr1-patch" >             Interface "OVSbr1-patch" >                 type: patch >                 options: {peer="OVSbr2-patch"} >         Port "ens4" >             Interface "ens4" >     ovs_version: "2.5.2" > > > the ping stops ... and again with tcpdump I can only see ARP requests > on ens4 but not on the OVSbr1 bridge ... > > But I see now some LLDP packets on the ens4 and OVSbr1 .... > > Then I tried following ... I stopped the ping from source to TestNFVVM > And start pinging from the TestNFV (192.168.120.5) to the Source > (192.168.120.10) > Again I didnt get any response ... > And again looked at the tcpdump of OVSbr1 and ens4 ... > > tcpdump: verbose output suppressed, use -v or -vv for full protocol > decode > listening on OVSbr1, link-type EN10MB (Ethernet), capture size 262144 > bytes > 13:59:18.245528 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, > id 1839, seq 286, length 64 > 13:59:19.253513 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, > id 1839, seq 287, length 64 > 13:59:20.261487 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, > id 1839, seq 288, length 64 > 13:59:21.269499 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, > id 1839, seq 289, length 64 > 13:59:21.680458 LLDP, length 110: openflow:214083694506308 > 13:59:22.277524 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, > id 1839, seq 290, length 64 > 13:59:23.285531 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, > id 1839, seq 291, length 64 > 13:59:24.293631 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, > id 1839, seq 292, length 64 > 13:59:25.301529 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, > id 1839, seq 293, length 64 > 13:59:26.309588 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, > id 1839, seq 294, length 64 > 13:59:26.680238 LLDP, length 110: openflow:214083694506308 > 13:59:27.317591 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, > id 1839, seq 295, length 64 > 13:59:28.325524 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, > id 1839, seq 296, length 64 > 13:59:29.333618 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, > id 1839, seq 297, length 64 > 13:59:30.341515 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, > id 1839, seq 298, length 64 > > > > tcpdump: verbose output suppressed, use -v or -vv for full protocol > decode > listening on ens4, link-type EN10MB (Ethernet), capture size 262144 bytes > 13:59:16.680452 LLDP, length 99: openflow:214083694506308 > 13:59:21.680791 LLDP, length 99: openflow:214083694506308 > 13:59:26.680532 LLDP, length 99: openflow:214083694506308 > 13:59:31.680503 LLDP, length 99: openflow:214083694506308 > 13:59:36.680681 LLDP, length 99: openflow:214083694506308 > 13:59:41.391777 ARP, Request who-has 192.168.120.10 tell > 
192.168.120.5, length 28 > 13:59:41.392345 ARP, Reply 192.168.120.10 is-at fa:16:3e:84:5c:29 (oui > Unknown), length 28 > 13:59:41.680626 LLDP, length 99: openflow:214083694506308 > 13:59:46.680692 LLDP, length 99: openflow:214083694506308 > > > > This is a bit confusing for me ... > First why does the echo request only appear on the OVSbr1 bridge and > not also on the ens4 ... is this correct behaviour? > and second why I got suddenly a ARP reply on ens4 which is indeed the > correct mac of the VM1 interface ... > and why the LLDP packets shown on both interfaces ... > > Is now something wrong with the FlowController? > I use ODL with odl-l2switch-all feature enabled ... > > puhhh ... what do I miss??? I didn't get this ... > > Thx a lot Mathias. > > > > > > > > On 2018-02-01 23:49, Volodymyr Litovka wrote: >> Hi Mathias, >> >>  I'm not so fluent with OVS, but I would recommend to join bridges >> using special "ports" like >> >> Port ovsbr1-patch >>  Interface ovsbr1-patch >>  type: patch >>  options: {peer=ovsbr2-patch} >>  and vice versa, keeping "native" configuration of "port OVSbr1" and >> "port OVSbr2" >> >>  And keep in mind that ARP scope is broadcast domain and, if using >> just ARP (not routing), from VM1 you will be able to ping hosts, >> belonging to OVSbr1, particularly - OVSbr1's IP. >> >> On 2/1/18 4:11 PM, Mathias Strufe (DFKI) wrote: >> >>> Dear Benjamin, Volodymyr, >>> >>> good question ;) ... I like to experiment with some kind of >>> "Firewall NFV" ... but in the first step, I want to build a Router >>> VM between two networks (and later extend it with some flow rules) >>> ... OpenStack, in my case, is more a foundation to build a "test >>> environment" for my "own" application ... please find attached a >>> quick sketch of the current network ... >>> I did this already before with iptables inside the middle instance >>> ... worked quite well ... but know I like to achieve the same with >>> OVS ... >>> I didn't expect that it is so much more difficult ;) ... >>> >>> I'm currently checking Volodymyrs answer ... I think first point is >>> now solved ... I "patched" now OVSbr1 and OVSbr2 inside the VM >>> together (see OVpatch file)... but I think this is important later >>> when I really like to ping from VM1 to VM2 ... but in the moment I >>> only ping from VM1 to the TestNFV ... but the arp requests only >>> reaches ens4 but not OVSbr1 (according to tcpdump)... >>> >>> May it have to do with port security and the (for OpenStack) >>> unknown MAC address of the OVS bridge? >>> >>> Thanks so far ... >>> >>> Mathias. >>> >>> On 2018-02-01 14:28, Benjamin Diaz wrote: >>> Dear Mathias, >>> >>> Could you attach a diagram of your network configuration and of >>> what >>> you are trying to achieve? >>> Are you trying to install OVS inside a VM? If so, why? >>> >>> Greetings, >>> Benjamin >>> >>> On Thu, Feb 1, 2018 at 8:30 AM, Volodymyr Litovka >>> >>> wrote: >>> >>> Dear Mathias, >>> >>> if I correctly understand your configuration, you're using bridges >>> inside VM and it configuration looks a bit strange: >>> >>> 1) you use two different bridges (OVSbr1/192.168.120.x and >>> OVSbr2/192.168.110.x) and there is no patch between them so they're >>> >>> separate >>> 2) while ARP requests for address in OVSbr1 arrives from OVSbr2: >>> >>> 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>> >>> but on the OVS bridge nothing arrives ... 
>>> >>> listening on OVSBR2, link-type EN10MB (Ethernet), capture size >>> 262144 bytes >>> >>> while these bridges are separate, ARP requests and answers will not >>> >>> be passed between them. >>> >>> Regarding your devstack configuration - unfortunately, I don't have >>> >>> experience with devstack, so don't know, where it stores configs. >>> In >>> Openstack, ml2_conf.ini points to openvswitch in ml2's >>> mechanism_drivers parameter, in my case it looks as the following: >>> >>> [ml2] >>> mechanism_drivers = l2population,openvswitch >>> >>> and rest of openvswitch config described in >>> /etc/neutron/plugins/ml2/openvswitch_agent.ini >>> >>> Second - I see an ambiguity in your br-tun configuration, where >>> patch_int is the same as patch-int without corresponding remote >>> peer >>> config, probably you should check this issue. >>> >>> And third is - note that Mitaka is quite old release and probably >>> you can give a chance for the latest release of devstack? :-) >>> >>> On 1/31/18 10:49 PM, Mathias Strufe (DFKI) wrote: >>> Dear Volodymyr, all, >>> >>> thanks for your fast answer ... >>> but I'm still facing the same problem, still can't ping the >>> instance with configured and up OVS bridge ... may because I'm >>> quite >>> new to OpenStack and OpenVswitch and didn't see the problem ;) >>> >>> My setup is devstack Mitaka in single machine config ... first of >>> all I didn't find there the openvswitch_agent.ini anymore, I >>> remember in previous version it was in the neutron/plugin folder >>> ... >>> >>> Is this config now done in the ml2 config file in the [OVS] >>> section???? >>> >>> I'm really wondering ... >>> so I can ping between the 2 instances without any problem. But as >>> soon I bring up the OVS bridge inside the vm the ARP requests only >>> visible at the ens interface but not reaching the OVSbr ... >>> >>> please find attached two files which may help for troubleshooting. >>> One are some network information from inside the Instance that runs >>> >>> the OVS and one ovs-vsctl info of the OpenStack Host. >>> >>> If you need more info/logs please let me know! Thanks for your >>> help! >>> >>> BR Mathias. >>> >>> On 2018-01-27 22:44, Volodymyr Litovka wrote: >>> Hi Mathias, >>> >>> whether you have all corresponding bridges and patches between >>> them >>> as described in openvswitch_agent.ini using >>> >>> integration_bridge >>> tunnel_bridge >>> int_peer_patch_port >>> tun_peer_patch_port >>> bridge_mappings >>> >>> parameters? And make sure, that service "neutron-ovs-cleanup" is >>> in >>> use during system boot. You can check these bridges and patches >>> using >>> "ovs-vsctl show" command. >>> >>> On 1/27/18 9:00 PM, Mathias Strufe (DFKI) wrote: >>> >>> Dear all, >>> >>> I'm quite new to openstack and like to install openVSwtich inside >>> one Instance of our Mitika openstack Lab Enviornment ... >>> But it seems that ARP packets got lost between the network >>> interface of the instance and the OVS bridge ... >>> >>> With tcpdump on the interface I see the APR packets ... 
>>> >>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>> >>> >>> decode >>> listening on ens6, link-type EN10MB (Ethernet), capture size 262144 >>> >>> >>> bytes >>> 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>> 18:50:58.125009 ARP, Request who-has 192.168.120.1 tell >>> 192.168.120.6, length 28 >>> 18:50:59.077315 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>> 18:50:59.121369 ARP, Request who-has 192.168.120.1 tell >>> 192.168.120.6, length 28 >>> 18:51:00.077327 ARP, Request who-has 192.168.120.10 tell >>> 192.168.120.6, length 28 >>> 18:51:00.121343 ARP, Request who-has 192.168.120.1 tell >>> 192.168.120.6, length 28 >>> >>> but on the OVS bridge nothing arrives ... >>> >>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>> >>> >>> decode >>> listening on OVSbr2, link-type EN10MB (Ethernet), capture size >>> 262144 bytes >>> >>> I disabled port_security and removed the security group but nothing >>> >>> >>> changed >> >> +-----------------------+---------------------------------------------------------------------------------------+ >> >> >> >>> | Field | Value >>> | >> >> +-----------------------+---------------------------------------------------------------------------------------+ >> >> >> >>> | admin_state_up | True >>> | >>> | allowed_address_pairs | >>> | >>> | binding:host_id | node11 >>> | >>> | binding:profile | {} >>> | >>> | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": >>> true} | >>> | binding:vif_type | ovs >>> | >>> | binding:vnic_type | normal >>> | >>> | created_at | 2018-01-27T16:45:48Z >>> | >>> | description | >>> | >>> | device_id | 74916967-984c-4617-ae33-b847de73de13 >>> | >>> | device_owner | compute:nova >>> | >>> | extra_dhcp_opts | >>> | >>> | fixed_ips | {"subnet_id": >>> "525db7ff-2bf2-4c64-b41e-1e41570ec358", "ip_address": >>> "192.168.120.10"} | >>> | id | 74b754d6-0000-4c2e-bfd1-87f640154ac9 >>> | >>> | mac_address | fa:16:3e:af:90:0c >>> | >>> | name | >>> | >>> | network_id | 917254cb-9721-4207-99c5-8ead9f95d186 >>> | >>> | port_security_enabled | False >>> | >>> | project_id | c48457e73b664147a3d2d36d75dcd155 >>> | >>> | revision_number | 27 >>> | >>> | security_groups | >>> | >>> | status | ACTIVE >>> | >>> | tenant_id | c48457e73b664147a3d2d36d75dcd155 >>> | >>> | updated_at | 2018-01-27T18:54:24Z >>> | >> >> +-----------------------+---------------------------------------------------------------------------------------+ >> >> >> >>> maybe the port_filter causes still the problem? But how to disable >>> it? >>> >>> Any other idea? >>> >>> Thanks and BR Mathias. >>> >>> _______________________________________________ >>> Mailing list: >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >>> [1] >>> [1] >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >>> [1] >>> [1] >>> >>> -- >>> Volodymyr Litovka >>> "Vision without Execution is Hallucination." -- Thomas Edison >>> >>> Links: >>> ------ >>> [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> [1] >>> [1] >> >>  -- >>  Volodymyr Litovka >>   "Vision without Execution is Hallucination." 
-- Thomas Edison >> >>  _______________________________________________ >>   Mailing list: >>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] [1] >> >>   Post to     : openstack at lists.openstack.org >>   Unsubscribe : >>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] [1] >> >> >>  -- >> >>  BENJAMÍN DÍAZ >>  Cloud Computing Engineer >> >>   bdiaz at whitestack.com >> >>  Links: >>  ------ >>  [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >> >> >> -- >> Volodymyr Litovka >>  "Vision without Execution is Hallucination." -- Thomas Edison >> >> >> Links: >> ------ >> [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... URL: From mathias.strufe at dfki.de Fri Feb 2 16:42:11 2018 From: mathias.strufe at dfki.de (Mathias Strufe) Date: Fri, 2 Feb 2018 17:42:11 +0100 Subject: [Openstack] OpenVSwitch inside Instance no ARP passthrough In-Reply-To: <8f290220-b54f-97a5-5cd2-9d24600889e2@gmx.com> References: <19e2c014fb8d332bdb3518befce68a37@projects.dfki.uni-kl.de> <11ea9728-9446-2d8c-db3f-f5712e891af4@gmx.com> <9e663e326f138cf141d11964764388f1@projects.dfki.uni-kl.de> <7da0834a-12f7-ce79-db48-87c4058040cd@gmx.com> <1b307fff14ee05267a5dad10216c3d04@projects.dfki.uni-kl.de> <97e3c948-7745-ad7a-5465-2d2017189f35@gmx.com> <084f35e8025cfc8cdab0e0950cf94ef7@projects.dfki.uni-kl.de> <8f290220-b54f-97a5-5cd2-9d24600889e2@gmx.com> Message-ID: <012601d39c44$c89d0840$59d718c0$@dfki.de> It seems you are right … I setup quickly POX and its working … :o Oh dear … Thanks a lot!!! Have a nice weekend! Von: Volodymyr Litovka [mailto:doka.ua at gmx.com] Gesendet: Freitag, 2. Februar 2018 15:33 An: Mathias Strufe (DFKI) ; Benjamin Diaz ; OpenStack Mailing List Cc: doka.ua at gmx.com Betreff: Re: [Openstack] OpenVSwitch inside Instance no ARP passthrough Hi Mathias, the fact that you've seen ARP request-reply says that connectivity itself is correct. I think the problem with flows configuration inside bridge, which is controlled by ODL. Unfortunately, I never had an experience with ODL and can't comment what it do and how. You can print flows config using command ovs-ofctl dump-flows and there you can try to find whether some rules block some traffic. On 2/2/18 4:14 PM, Mathias Strufe (DFKI) wrote: Dear Volodymyr, Benjamin, thanks a lot for your tipps and patience ... but still facing the same problem :/ So I need to bother you again ... I think its something totally stupid, basic I do wrong ... 
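Since the flows pushed by the controller seem to have been the blocker here, one alternative for a setup that does not actually need an SDN controller is to detach the in-VM bridges from it and let OVS fall back to plain MAC learning. This is only a sketch under that assumption, using the bridge names from this thread:

# drop the ODL controller from the in-VM bridges
sudo ovs-vsctl del-controller OVSbr1
sudo ovs-vsctl del-controller OVSbr2

# standalone fail mode makes a bridge behave as a normal learning switch
# whenever no controller is connected
sudo ovs-vsctl set-fail-mode OVSbr1 standalone
sudo ovs-vsctl set-fail-mode OVSbr2 standalone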
Let me summarize what I did so far: - Update OpenStack to pike (devstack All in Single VM using default local.conf) [following this https://docs.openstack.org/devstack/latest/index.html] - Set prevent_arp_spoofing = False in ml2_config.ini - Disable Port Security of the TestNFV +-----------------------+------------------------------------------------------------------------------+ | Field | Value | +-----------------------+------------------------------------------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | None | | binding_profile | None | | binding_vif_details | None | | binding_vif_type | None | | binding_vnic_type | normal | | created_at | 2018-01-31T07:50:40Z | | data_plane_status | None | | description | | | device_id | 97101c9b-c5ea-47f5-aa50-4a6ffa06c2a2 | | device_owner | compute:nova | | dns_assignment | None | | dns_name | None | | extra_dhcp_opts | | | fixed_ips | ip_address='192.168.120.5', subnet_id='b88f21e0-55ce-482f-8755-87a431f43e52' | | id | 5e97ea14-2555-44fc-bbfa-61877e93ae69 | | ip_address | None | | mac_address | fa:16:3e:55:80:84 | | name | | | network_id | 67572da9-c1e1-4330-84f6-79b64225c070 | | option_name | None | | option_value | None | | port_security_enabled | False | | project_id | ec8680e914a540e59d9d84dec8101ba5 | | qos_policy_id | None | | revision_number | 56 | | security_group_ids | | | status | ACTIVE | | subnet_id | None | | tags | | | trunk_details | None | | updated_at | 2018-02-02T13:40:26Z | +-----------------------+------------------------------------------------------------------------------+ In this state everything works fine and as expected ... I can ping from VM1 (192.168.120.10) to Test NVF VM (192.168.120.5) and get a response ... I have access to the outside world ... BUT As soon as I bring the OVS up inside of the Test NVF ... now as Volodymyr proposed with a "special patch port" Database config: 59aca356-8f37-4c6f-8c9a-504c66c65648 Bridge "OVSbr2" Controller "tcp:192.168.53.49:6633" is_connected: true fail_mode: secure Port "OVSbr2-patch" Interface "OVSbr2-patch" type: patch options: {peer="OVSbr1-patch"} Port "OVSbr2" Interface "OVSbr2" type: internal Port "ens5" Interface "ens5" Bridge "OVSbr1" Controller "tcp:192.168.53.49:6633" is_connected: true fail_mode: secure Port "OVSbr1" Interface "OVSbr1" type: internal Port "OVSbr1-patch" Interface "OVSbr1-patch" type: patch options: {peer="OVSbr2-patch"} Port "ens4" Interface "ens4" ovs_version: "2.5.2" the ping stops ... and again with tcpdump I can only see ARP requests on ens4 but not on the OVSbr1 bridge ... But I see now some LLDP packets on the ens4 and OVSbr1 .... Then I tried following ... I stopped the ping from source to TestNFVVM And start pinging from the TestNFV (192.168.120.5) to the Source (192.168.120.10) Again I didnt get any response ... And again looked at the tcpdump of OVSbr1 and ens4 ... 
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on OVSbr1, link-type EN10MB (Ethernet), capture size 262144 bytes 13:59:18.245528 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 286, length 64 13:59:19.253513 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 287, length 64 13:59:20.261487 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 288, length 64 13:59:21.269499 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 289, length 64 13:59:21.680458 LLDP, length 110: openflow:214083694506308 13:59:22.277524 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 290, length 64 13:59:23.285531 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 291, length 64 13:59:24.293631 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 292, length 64 13:59:25.301529 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 293, length 64 13:59:26.309588 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 294, length 64 13:59:26.680238 LLDP, length 110: openflow:214083694506308 13:59:27.317591 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 295, length 64 13:59:28.325524 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 296, length 64 13:59:29.333618 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 297, length 64 13:59:30.341515 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 298, length 64 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on ens4, link-type EN10MB (Ethernet), capture size 262144 bytes 13:59:16.680452 LLDP, length 99: openflow:214083694506308 13:59:21.680791 LLDP, length 99: openflow:214083694506308 13:59:26.680532 LLDP, length 99: openflow:214083694506308 13:59:31.680503 LLDP, length 99: openflow:214083694506308 13:59:36.680681 LLDP, length 99: openflow:214083694506308 13:59:41.391777 ARP, Request who-has 192.168.120.10 tell 192.168.120.5, length 28 13:59:41.392345 ARP, Reply 192.168.120.10 is-at fa:16:3e:84:5c:29 (oui Unknown), length 28 13:59:41.680626 LLDP, length 99: openflow:214083694506308 13:59:46.680692 LLDP, length 99: openflow:214083694506308 This is a bit confusing for me ... First why does the echo request only appear on the OVSbr1 bridge and not also on the ens4 ... is this correct behaviour? and second why I got suddenly a ARP reply on ens4 which is indeed the correct mac of the VM1 interface ... and why the LLDP packets shown on both interfaces ... Is now something wrong with the FlowController? I use ODL with odl-l2switch-all feature enabled ... puhhh ... what do I miss??? I didn't get this ... Thx a lot Mathias. On 2018-02-01 23:49, Volodymyr Litovka wrote: Hi Mathias, I'm not so fluent with OVS, but I would recommend to join bridges using special "ports" like Port ovsbr1-patch Interface ovsbr1-patch type: patch options: {peer=ovsbr2-patch} and vice versa, keeping "native" configuration of "port OVSbr1" and "port OVSbr2" And keep in mind that ARP scope is broadcast domain and, if using just ARP (not routing), from VM1 you will be able to ping hosts, belonging to OVSbr1, particularly - OVSbr1's IP. On 2/1/18 4:11 PM, Mathias Strufe (DFKI) wrote: Dear Benjamin, Volodymyr, good question ;) ... I like to experiment with some kind of "Firewall NFV" ... but in the first step, I want to build a Router VM between two networks (and later extend it with some flow rules) ... 
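For reference, a patch pair like the one Volodymyr describes above can be created with ovs-vsctl along these lines (the bridge and port names are simply the ones used in this thread, so adjust as needed):

ovs-vsctl add-port OVSbr1 OVSbr1-patch -- set interface OVSbr1-patch type=patch options:peer=OVSbr2-patch
ovs-vsctl add-port OVSbr2 OVSbr2-patch -- set interface OVSbr2-patch type=patch options:peer=OVSbr1-patch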
OpenStack, in my case, is more a foundation to build a "test environment" for my "own" application ... please find attached a quick sketch of the current network ... I did this already before with iptables inside the middle instance ... worked quite well ... but know I like to achieve the same with OVS ... I didn't expect that it is so much more difficult ;) ... I'm currently checking Volodymyrs answer ... I think first point is now solved ... I "patched" now OVSbr1 and OVSbr2 inside the VM together (see OVpatch file)... but I think this is important later when I really like to ping from VM1 to VM2 ... but in the moment I only ping from VM1 to the TestNFV ... but the arp requests only reaches ens4 but not OVSbr1 (according to tcpdump)... May it have to do with port security and the (for OpenStack) unknown MAC address of the OVS bridge? Thanks so far ... Mathias. On 2018-02-01 14:28, Benjamin Diaz wrote: Dear Mathias, Could you attach a diagram of your network configuration and of what you are trying to achieve? Are you trying to install OVS inside a VM? If so, why? Greetings, Benjamin On Thu, Feb 1, 2018 at 8:30 AM, Volodymyr Litovka wrote: Dear Mathias, if I correctly understand your configuration, you're using bridges inside VM and it configuration looks a bit strange: 1) you use two different bridges (OVSbr1/192.168.120.x and OVSbr2/192.168.110.x) and there is no patch between them so they're separate 2) while ARP requests for address in OVSbr1 arrives from OVSbr2: 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28 but on the OVS bridge nothing arrives ... listening on OVSBR2, link-type EN10MB (Ethernet), capture size 262144 bytes while these bridges are separate, ARP requests and answers will not be passed between them. Regarding your devstack configuration - unfortunately, I don't have experience with devstack, so don't know, where it stores configs. In Openstack, ml2_conf.ini points to openvswitch in ml2's mechanism_drivers parameter, in my case it looks as the following: [ml2] mechanism_drivers = l2population,openvswitch and rest of openvswitch config described in /etc/neutron/plugins/ml2/openvswitch_agent.ini Second - I see an ambiguity in your br-tun configuration, where patch_int is the same as patch-int without corresponding remote peer config, probably you should check this issue. And third is - note that Mitaka is quite old release and probably you can give a chance for the latest release of devstack? :-) On 1/31/18 10:49 PM, Mathias Strufe (DFKI) wrote: Dear Volodymyr, all, thanks for your fast answer ... but I'm still facing the same problem, still can't ping the instance with configured and up OVS bridge ... may because I'm quite new to OpenStack and OpenVswitch and didn't see the problem ;) My setup is devstack Mitaka in single machine config ... first of all I didn't find there the openvswitch_agent.ini anymore, I remember in previous version it was in the neutron/plugin folder ... Is this config now done in the ml2 config file in the [OVS] section???? I'm really wondering ... so I can ping between the 2 instances without any problem. But as soon I bring up the OVS bridge inside the vm the ARP requests only visible at the ens interface but not reaching the OVSbr ... please find attached two files which may help for troubleshooting. One are some network information from inside the Instance that runs the OVS and one ovs-vsctl info of the OpenStack Host. If you need more info/logs please let me know! Thanks for your help! BR Mathias. 
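For reference, the config split Volodymyr mentions above typically looks roughly like this on a Mitaka-or-later node; the paths and the bridge mapping below are only an illustration and are not taken from this devstack:

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
mechanism_drivers = l2population,openvswitch

# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
bridge_mappings = public:br-ex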
On 2018-01-27 22:44, Volodymyr Litovka wrote: Hi Mathias, whether you have all corresponding bridges and patches between them as described in openvswitch_agent.ini using integration_bridge tunnel_bridge int_peer_patch_port tun_peer_patch_port bridge_mappings parameters? And make sure, that service "neutron-ovs-cleanup" is in use during system boot. You can check these bridges and patches using "ovs-vsctl show" command. On 1/27/18 9:00 PM, Mathias Strufe (DFKI) wrote: Dear all, I'm quite new to openstack and like to install openVSwtich inside one Instance of our Mitika openstack Lab Enviornment ... But it seems that ARP packets got lost between the network interface of the instance and the OVS bridge ... With tcpdump on the interface I see the APR packets ... tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on ens6, link-type EN10MB (Ethernet), capture size 262144 bytes 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28 18:50:58.125009 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28 18:50:59.077315 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28 18:50:59.121369 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28 18:51:00.077327 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28 18:51:00.121343 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28 but on the OVS bridge nothing arrives ... tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on OVSbr2, link-type EN10MB (Ethernet), capture size 262144 bytes I disabled port_security and removed the security group but nothing changed +-----------------------+---------------------------------------------------------------------------------------+ | Field | Value | +-----------------------+---------------------------------------------------------------------------------------+ | admin_state_up | True | | allowed_address_pairs | | | binding:host_id | node11 | | binding:profile | {} | | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": true} | | binding:vif_type | ovs | | binding:vnic_type | normal | | created_at | 2018-01-27T16:45:48Z | | description | | | device_id | 74916967-984c-4617-ae33-b847de73de13 | | device_owner | compute:nova | | extra_dhcp_opts | | | fixed_ips | {"subnet_id": "525db7ff-2bf2-4c64-b41e-1e41570ec358", "ip_address": "192.168.120.10"} | | id | 74b754d6-0000-4c2e-bfd1-87f640154ac9 | | mac_address | fa:16:3e:af:90:0c | | name | | | network_id | 917254cb-9721-4207-99c5-8ead9f95d186 | | port_security_enabled | False | | project_id | c48457e73b664147a3d2d36d75dcd155 | | revision_number | 27 | | security_groups | | | status | ACTIVE | | tenant_id | c48457e73b664147a3d2d36d75dcd155 | | updated_at | 2018-01-27T18:54:24Z | +-----------------------+---------------------------------------------------------------------------------------+ maybe the port_filter causes still the problem? But how to disable it? Any other idea? Thanks and BR Mathias. _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] [1] [1] Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] [1] [1] -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison Links: ------ [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] [1] -- Volodymyr Litovka "Vision without Execution is Hallucination." 
-- Thomas Edison _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] [1] Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] [1] -- BENJAMÍN DÍAZ Cloud Computing Engineer bdiaz at whitestack.com Links: ------ [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison Links: ------ [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sat Feb 3 06:11:53 2018 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 3 Feb 2018 01:11:53 -0500 Subject: [Openstack] openstack-ansible aio error Message-ID: I have started playing with openstack-ansible on CentOS7 and trying to install All-in-one but got this error and not sure what cause that error how do i troubleshoot it? TASK [bootstrap-host : Remove an existing private/public ssh keys if one is missing] ************************************************************************ skipping: [localhost] => (item=id_rsa) skipping: [localhost] => (item=id_rsa.pub) TASK [bootstrap-host : Create ssh key pair for root] ******************************************************************************************************** ok: [localhost] TASK [bootstrap-host : Fetch the generated public ssh key] ************************************************************************************************** changed: [localhost] TASK [bootstrap-host : Ensure root's new public ssh key is in authorized_keys] ****************************************************************************** ok: [localhost] TASK [bootstrap-host : Create the required deployment directories] ****************************************************************************************** changed: [localhost] => (item=/etc/openstack_deploy) changed: [localhost] => (item=/etc/openstack_deploy/conf.d) changed: [localhost] => (item=/etc/openstack_deploy/env.d) TASK [bootstrap-host : Deploy user conf.d configuration] **************************************************************************************************** fatal: [localhost]: FAILED! => {"msg": "{{ confd_overrides[bootstrap_host_scenario] }}: 'dict object' has no attribute u'aio'"} RUNNING HANDLER [sshd : Reload the SSH service] ************************************************************************************************************* to retry, use: --limit @/opt/openstack-ansible/tests/bootstrap-aio.retry PLAY RECAP ************************************************************************************************************************************************** localhost : ok=61 changed=36 unreachable=0 failed=2 [root at aio openstack-ansible]# From satish.txt at gmail.com Sun Feb 4 00:29:41 2018 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 3 Feb 2018 19:29:41 -0500 Subject: [Openstack] openstack-ansible aio error In-Reply-To: References: Message-ID: I have tired everything but didn't able to find solution :( what i am doing wrong here, i am following this instruction and please let me know if i am wrong https://developer.rackspace.com/blog/life-without-devstack-openstack-development-with-osa/ I have CentOS7, with 8 CPU and 16GB memory with 100GB disk size. 
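For reference, the sequence from that guide boils down to something like this (paths assumed, using the current master branch of the repo):

# git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
# cd /opt/openstack-ansible
# scripts/bootstrap-ansible.sh
# scripts/bootstrap-aio.sh
# cd playbooks
# openstack-ansible setup-hosts.yml setup-infrastructure.yml setup-openstack.yml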
Error: http://paste.openstack.org/show/660497/ I have tired gate-check-commit.sh but same error :( On Sat, Feb 3, 2018 at 1:11 AM, Satish Patel wrote: > I have started playing with openstack-ansible on CentOS7 and trying to > install All-in-one but got this error and not sure what cause that > error how do i troubleshoot it? > > > TASK [bootstrap-host : Remove an existing private/public ssh keys if > one is missing] > ************************************************************************ > skipping: [localhost] => (item=id_rsa) > skipping: [localhost] => (item=id_rsa.pub) > > TASK [bootstrap-host : Create ssh key pair for root] > ******************************************************************************************************** > ok: [localhost] > > TASK [bootstrap-host : Fetch the generated public ssh key] > ************************************************************************************************** > changed: [localhost] > > TASK [bootstrap-host : Ensure root's new public ssh key is in > authorized_keys] > ****************************************************************************** > ok: [localhost] > > TASK [bootstrap-host : Create the required deployment directories] > ****************************************************************************************** > changed: [localhost] => (item=/etc/openstack_deploy) > changed: [localhost] => (item=/etc/openstack_deploy/conf.d) > changed: [localhost] => (item=/etc/openstack_deploy/env.d) > > TASK [bootstrap-host : Deploy user conf.d configuration] > **************************************************************************************************** > fatal: [localhost]: FAILED! => {"msg": "{{ > confd_overrides[bootstrap_host_scenario] }}: 'dict object' has no > attribute u'aio'"} > > RUNNING HANDLER [sshd : Reload the SSH service] > ************************************************************************************************************* > to retry, use: --limit @/opt/openstack-ansible/tests/bootstrap-aio.retry > > PLAY RECAP ************************************************************************************************************************************************** > localhost : ok=61 changed=36 unreachable=0 failed=2 > > [root at aio openstack-ansible]# From satish.txt at gmail.com Sun Feb 4 02:26:08 2018 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 3 Feb 2018 21:26:08 -0500 Subject: [Openstack] openstack-ansible aio error In-Reply-To: References: Message-ID: I have re-install centos7 and give it a try and got this error DEBUG MESSAGE RECAP ************************************************************ DEBUG: [Load local packages] *************************************************** All items completed Saturday 03 February 2018 21:04:07 -0500 (0:00:04.175) 0:16:17.204 ***** =============================================================================== repo_build : Create OpenStack-Ansible requirement wheels -------------- 268.16s repo_build : Wait for the venvs builds to complete -------------------- 110.30s repo_build : Install packages ------------------------------------------ 68.26s repo_build : Clone git repositories asynchronously --------------------- 59.85s pip_install : Install distro packages ---------------------------------- 36.72s galera_client : Install galera distro packages ------------------------- 33.21s haproxy_server : Create haproxy service config files ------------------- 30.81s repo_build : Execute the venv build scripts asynchonously -------------- 29.69s pip_install : Install distro 
packages ---------------------------------- 23.56s repo_server : Install repo server packages ----------------------------- 20.11s memcached_server : Install distro packages ----------------------------- 16.35s repo_build : Create venv build options files --------------------------- 14.57s haproxy_server : Install HAProxy Packages ------------------------------- 8.35s rsyslog_client : Install rsyslog packages ------------------------------- 8.33s rsyslog_client : Install rsyslog packages ------------------------------- 7.64s rsyslog_client : Install rsyslog packages ------------------------------- 7.42s repo_build : Wait for git clones to complete ---------------------------- 7.25s repo_server : Install repo caching server packages ---------------------- 4.76s galera_server : Check that WSREP is ready ------------------------------- 4.18s repo_server : Git service data folder setup ----------------------------- 4.04s ++ exit_fail 341 0 ++ set +x ++ info_block 'Error Info - 341' 0 ++ echo ---------------------------------------------------------------------- ---------------------------------------------------------------------- ++ print_info 'Error Info - 341' 0 ++ PROC_NAME='- [ Error Info - 341 0 ] -' ++ printf '\n%s%s\n' '- [ Error Info - 341 0 ] -' -------------------------------------------- - [ Error Info - 341 0 ] --------------------------------------------- ++ echo ---------------------------------------------------------------------- ---------------------------------------------------------------------- ++ exit_state 1 ++ set +x ---------------------------------------------------------------------- - [ Run Time = 2030 seconds || 33 minutes ] -------------------------- ---------------------------------------------------------------------- ---------------------------------------------------------------------- - [ Status: Failure ] ------------------------------------------------ ---------------------------------------------------------------------- I don't know why it failed but i tried following: [root at aio ~]# lxc-ls -f NAME STATE AUTOSTART GROUPS IPV4 IPV6 aio1_cinder_api_container-2af4dd01 RUNNING 1 onboot, openstack 10.255.255.62, 172.29.238.210, 172.29.244.152 - aio1_cinder_scheduler_container-454db1fb RUNNING 1 onboot, openstack 10.255.255.117, 172.29.239.172 - aio1_designate_container-f7ea3f73 RUNNING 1 onboot, openstack 10.255.255.235, 172.29.239.166 - aio1_galera_container-4f488f6a RUNNING 1 onboot, openstack 10.255.255.193, 172.29.236.69 - aio1_glance_container-f8caa9e6 RUNNING 1 onboot, openstack 10.255.255.225, 172.29.239.52, 172.29.246.25 - aio1_heat_api_container-8321a763 RUNNING 1 onboot, openstack 10.255.255.104, 172.29.236.186 - aio1_heat_apis_container-3f70ad74 RUNNING 1 onboot, openstack 10.255.255.166, 172.29.239.13 - aio1_heat_engine_container-a18e5a0a RUNNING 1 onboot, openstack 10.255.255.118, 172.29.238.7 - aio1_horizon_container-e493275c RUNNING 1 onboot, openstack 10.255.255.98, 172.29.237.43 - aio1_keystone_container-c0e23e14 RUNNING 1 onboot, openstack 10.255.255.60, 172.29.237.165 - aio1_memcached_container-ef8fed4c RUNNING 1 onboot, openstack 10.255.255.214, 172.29.238.211 - aio1_neutron_agents_container-131e996e RUNNING 1 onboot, openstack 10.255.255.153, 172.29.237.246, 172.29.243.227 - aio1_neutron_server_container-ccd69394 RUNNING 1 onboot, openstack 10.255.255.27, 172.29.236.129 - aio1_nova_api_container-73274024 RUNNING 1 onboot, openstack 10.255.255.42, 172.29.238.201 - aio1_nova_api_metadata_container-a1d32282 RUNNING 1 onboot, openstack 
10.255.255.218, 172.29.238.153 - aio1_nova_api_os_compute_container-52725940 RUNNING 1 onboot, openstack 10.255.255.109, 172.29.236.126 - aio1_nova_api_placement_container-058e8031 RUNNING 1 onboot, openstack 10.255.255.29, 172.29.236.157 - aio1_nova_conductor_container-9b6b208c RUNNING 1 onboot, openstack 10.255.255.18, 172.29.239.9 - aio1_nova_console_container-0fb8995c RUNNING 1 onboot, openstack 10.255.255.47, 172.29.237.129 - aio1_nova_scheduler_container-8f7a657a RUNNING 1 onboot, openstack 10.255.255.195, 172.29.238.113 - aio1_rabbit_mq_container-c3450d66 RUNNING 1 onboot, openstack 10.255.255.111, 172.29.237.202 - aio1_repo_container-8e07fdef RUNNING 1 onboot, openstack 10.255.255.141, 172.29.239.79 - aio1_rsyslog_container-b198fbe5 RUNNING 1 onboot, openstack 10.255.255.13, 172.29.236.195 - aio1_swift_proxy_container-1a3536e1 RUNNING 1 onboot, openstack 10.255.255.108, 172.29.237.31, 172.29.244.248 - aio1_utility_container-bd106f11 RUNNING 1 onboot, openstack 10.255.255.54, 172.29.239.124 - [root at aio ~]# lxc-a lxc-attach lxc-autostart [root at aio ~]# lxc-attach -n aio1_utility_container-bd106f11 [root at aio1-utility-container-bd106f11 ~]# [root at aio1-utility-container-bd106f11 ~]# source /root/openrc [root at aio1-utility-container-bd106f11 ~]# openstack openstack openstack-host-hostfile-setup.sh [root at aio1-utility-container-bd106f11 ~]# openstack openstack openstack-host-hostfile-setup.sh [root at aio1-utility-container-bd106f11 ~]# openstack user list Failed to discover available identity versions when contacting http://172.29.236.100:5000/v3. Attempting to parse version from URL. Service Unavailable (HTTP 503) [root at aio1-utility-container-bd106f11 ~]# not sure what is this error ? On Sat, Feb 3, 2018 at 7:29 PM, Satish Patel wrote: > I have tired everything but didn't able to find solution :( what i am > doing wrong here, i am following this instruction and please let me > know if i am wrong > > https://developer.rackspace.com/blog/life-without-devstack-openstack-development-with-osa/ > > I have CentOS7, with 8 CPU and 16GB memory with 100GB disk size. > > Error: http://paste.openstack.org/show/660497/ > > > I have tired gate-check-commit.sh but same error :( > > > > On Sat, Feb 3, 2018 at 1:11 AM, Satish Patel wrote: >> I have started playing with openstack-ansible on CentOS7 and trying to >> install All-in-one but got this error and not sure what cause that >> error how do i troubleshoot it? 
>> >> >> TASK [bootstrap-host : Remove an existing private/public ssh keys if >> one is missing] >> ************************************************************************ >> skipping: [localhost] => (item=id_rsa) >> skipping: [localhost] => (item=id_rsa.pub) >> >> TASK [bootstrap-host : Create ssh key pair for root] >> ******************************************************************************************************** >> ok: [localhost] >> >> TASK [bootstrap-host : Fetch the generated public ssh key] >> ************************************************************************************************** >> changed: [localhost] >> >> TASK [bootstrap-host : Ensure root's new public ssh key is in >> authorized_keys] >> ****************************************************************************** >> ok: [localhost] >> >> TASK [bootstrap-host : Create the required deployment directories] >> ****************************************************************************************** >> changed: [localhost] => (item=/etc/openstack_deploy) >> changed: [localhost] => (item=/etc/openstack_deploy/conf.d) >> changed: [localhost] => (item=/etc/openstack_deploy/env.d) >> >> TASK [bootstrap-host : Deploy user conf.d configuration] >> **************************************************************************************************** >> fatal: [localhost]: FAILED! => {"msg": "{{ >> confd_overrides[bootstrap_host_scenario] }}: 'dict object' has no >> attribute u'aio'"} >> >> RUNNING HANDLER [sshd : Reload the SSH service] >> ************************************************************************************************************* >> to retry, use: --limit @/opt/openstack-ansible/tests/bootstrap-aio.retry >> >> PLAY RECAP ************************************************************************************************************************************************** >> localhost : ok=61 changed=36 unreachable=0 failed=2 >> >> [root at aio openstack-ansible]# From satish.txt at gmail.com Sun Feb 4 02:41:43 2018 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 3 Feb 2018 21:41:43 -0500 Subject: [Openstack] openstack-ansible aio error In-Reply-To: References: Message-ID: I have noticed in output "aio1_galera_container" is failed, how do i fixed this kind of issue? 
PLAY RECAP ************************************************************************************************************************************************************************** aio1 : ok=41 changed=4 unreachable=0 failed=0 aio1_cinder_api_container-2af4dd01 : ok=0 changed=0 unreachable=0 failed=0 aio1_cinder_scheduler_container-454db1fb : ok=0 changed=0 unreachable=0 failed=0 aio1_designate_container-f7ea3f73 : ok=0 changed=0 unreachable=0 failed=0 aio1_galera_container-4f488f6a : ok=32 changed=3 unreachable=0 failed=1 On Sat, Feb 3, 2018 at 9:26 PM, Satish Patel wrote: > I have re-install centos7 and give it a try and got this error > > DEBUG MESSAGE RECAP ************************************************************ > DEBUG: [Load local packages] *************************************************** > All items completed > > Saturday 03 February 2018 21:04:07 -0500 (0:00:04.175) 0:16:17.204 ***** > =============================================================================== > repo_build : Create OpenStack-Ansible requirement wheels -------------- 268.16s > repo_build : Wait for the venvs builds to complete -------------------- 110.30s > repo_build : Install packages ------------------------------------------ 68.26s > repo_build : Clone git repositories asynchronously --------------------- 59.85s > pip_install : Install distro packages ---------------------------------- 36.72s > galera_client : Install galera distro packages ------------------------- 33.21s > haproxy_server : Create haproxy service config files ------------------- 30.81s > repo_build : Execute the venv build scripts asynchonously -------------- 29.69s > pip_install : Install distro packages ---------------------------------- 23.56s > repo_server : Install repo server packages ----------------------------- 20.11s > memcached_server : Install distro packages ----------------------------- 16.35s > repo_build : Create venv build options files --------------------------- 14.57s > haproxy_server : Install HAProxy Packages ------------------------------- 8.35s > rsyslog_client : Install rsyslog packages ------------------------------- 8.33s > rsyslog_client : Install rsyslog packages ------------------------------- 7.64s > rsyslog_client : Install rsyslog packages ------------------------------- 7.42s > repo_build : Wait for git clones to complete ---------------------------- 7.25s > repo_server : Install repo caching server packages ---------------------- 4.76s > galera_server : Check that WSREP is ready ------------------------------- 4.18s > repo_server : Git service data folder setup ----------------------------- 4.04s > ++ exit_fail 341 0 > ++ set +x > ++ info_block 'Error Info - 341' 0 > ++ echo ---------------------------------------------------------------------- > ---------------------------------------------------------------------- > ++ print_info 'Error Info - 341' 0 > ++ PROC_NAME='- [ Error Info - 341 0 ] -' > ++ printf '\n%s%s\n' '- [ Error Info - 341 0 ] -' > -------------------------------------------- > > - [ Error Info - 341 0 ] --------------------------------------------- > ++ echo ---------------------------------------------------------------------- > ---------------------------------------------------------------------- > ++ exit_state 1 > ++ set +x > ---------------------------------------------------------------------- > > - [ Run Time = 2030 seconds || 33 minutes ] -------------------------- > ---------------------------------------------------------------------- > 
---------------------------------------------------------------------- > > - [ Status: Failure ] ------------------------------------------------ > ---------------------------------------------------------------------- > > > > > I don't know why it failed > > but i tried following: > > [root at aio ~]# lxc-ls -f > NAME STATE AUTOSTART GROUPS > IPV4 IPV6 > aio1_cinder_api_container-2af4dd01 RUNNING 1 onboot, > openstack 10.255.255.62, 172.29.238.210, 172.29.244.152 - > aio1_cinder_scheduler_container-454db1fb RUNNING 1 onboot, > openstack 10.255.255.117, 172.29.239.172 - > aio1_designate_container-f7ea3f73 RUNNING 1 onboot, > openstack 10.255.255.235, 172.29.239.166 - > aio1_galera_container-4f488f6a RUNNING 1 onboot, > openstack 10.255.255.193, 172.29.236.69 - > aio1_glance_container-f8caa9e6 RUNNING 1 onboot, > openstack 10.255.255.225, 172.29.239.52, 172.29.246.25 - > aio1_heat_api_container-8321a763 RUNNING 1 onboot, > openstack 10.255.255.104, 172.29.236.186 - > aio1_heat_apis_container-3f70ad74 RUNNING 1 onboot, > openstack 10.255.255.166, 172.29.239.13 - > aio1_heat_engine_container-a18e5a0a RUNNING 1 onboot, > openstack 10.255.255.118, 172.29.238.7 - > aio1_horizon_container-e493275c RUNNING 1 onboot, > openstack 10.255.255.98, 172.29.237.43 - > aio1_keystone_container-c0e23e14 RUNNING 1 onboot, > openstack 10.255.255.60, 172.29.237.165 - > aio1_memcached_container-ef8fed4c RUNNING 1 onboot, > openstack 10.255.255.214, 172.29.238.211 - > aio1_neutron_agents_container-131e996e RUNNING 1 onboot, > openstack 10.255.255.153, 172.29.237.246, 172.29.243.227 - > aio1_neutron_server_container-ccd69394 RUNNING 1 onboot, > openstack 10.255.255.27, 172.29.236.129 - > aio1_nova_api_container-73274024 RUNNING 1 onboot, > openstack 10.255.255.42, 172.29.238.201 - > aio1_nova_api_metadata_container-a1d32282 RUNNING 1 onboot, > openstack 10.255.255.218, 172.29.238.153 - > aio1_nova_api_os_compute_container-52725940 RUNNING 1 onboot, > openstack 10.255.255.109, 172.29.236.126 - > aio1_nova_api_placement_container-058e8031 RUNNING 1 onboot, > openstack 10.255.255.29, 172.29.236.157 - > aio1_nova_conductor_container-9b6b208c RUNNING 1 onboot, > openstack 10.255.255.18, 172.29.239.9 - > aio1_nova_console_container-0fb8995c RUNNING 1 onboot, > openstack 10.255.255.47, 172.29.237.129 - > aio1_nova_scheduler_container-8f7a657a RUNNING 1 onboot, > openstack 10.255.255.195, 172.29.238.113 - > aio1_rabbit_mq_container-c3450d66 RUNNING 1 onboot, > openstack 10.255.255.111, 172.29.237.202 - > aio1_repo_container-8e07fdef RUNNING 1 onboot, > openstack 10.255.255.141, 172.29.239.79 - > aio1_rsyslog_container-b198fbe5 RUNNING 1 onboot, > openstack 10.255.255.13, 172.29.236.195 - > aio1_swift_proxy_container-1a3536e1 RUNNING 1 onboot, > openstack 10.255.255.108, 172.29.237.31, 172.29.244.248 - > aio1_utility_container-bd106f11 RUNNING 1 onboot, > openstack 10.255.255.54, 172.29.239.124 - > [root at aio ~]# lxc-a > lxc-attach lxc-autostart > [root at aio ~]# lxc-attach -n aio1_utility_container-bd106f11 > [root at aio1-utility-container-bd106f11 ~]# > [root at aio1-utility-container-bd106f11 ~]# source /root/openrc > [root at aio1-utility-container-bd106f11 ~]# openstack > openstack openstack-host-hostfile-setup.sh > [root at aio1-utility-container-bd106f11 ~]# openstack > openstack openstack-host-hostfile-setup.sh > [root at aio1-utility-container-bd106f11 ~]# openstack user list > Failed to discover available identity versions when contacting > http://172.29.236.100:5000/v3. Attempting to parse version from URL. 
> Service Unavailable (HTTP 503) > [root at aio1-utility-container-bd106f11 ~]# > > > not sure what is this error ? > > > On Sat, Feb 3, 2018 at 7:29 PM, Satish Patel wrote: >> I have tired everything but didn't able to find solution :( what i am >> doing wrong here, i am following this instruction and please let me >> know if i am wrong >> >> https://developer.rackspace.com/blog/life-without-devstack-openstack-development-with-osa/ >> >> I have CentOS7, with 8 CPU and 16GB memory with 100GB disk size. >> >> Error: http://paste.openstack.org/show/660497/ >> >> >> I have tired gate-check-commit.sh but same error :( >> >> >> >> On Sat, Feb 3, 2018 at 1:11 AM, Satish Patel wrote: >>> I have started playing with openstack-ansible on CentOS7 and trying to >>> install All-in-one but got this error and not sure what cause that >>> error how do i troubleshoot it? >>> >>> >>> TASK [bootstrap-host : Remove an existing private/public ssh keys if >>> one is missing] >>> ************************************************************************ >>> skipping: [localhost] => (item=id_rsa) >>> skipping: [localhost] => (item=id_rsa.pub) >>> >>> TASK [bootstrap-host : Create ssh key pair for root] >>> ******************************************************************************************************** >>> ok: [localhost] >>> >>> TASK [bootstrap-host : Fetch the generated public ssh key] >>> ************************************************************************************************** >>> changed: [localhost] >>> >>> TASK [bootstrap-host : Ensure root's new public ssh key is in >>> authorized_keys] >>> ****************************************************************************** >>> ok: [localhost] >>> >>> TASK [bootstrap-host : Create the required deployment directories] >>> ****************************************************************************************** >>> changed: [localhost] => (item=/etc/openstack_deploy) >>> changed: [localhost] => (item=/etc/openstack_deploy/conf.d) >>> changed: [localhost] => (item=/etc/openstack_deploy/env.d) >>> >>> TASK [bootstrap-host : Deploy user conf.d configuration] >>> **************************************************************************************************** >>> fatal: [localhost]: FAILED! => {"msg": "{{ >>> confd_overrides[bootstrap_host_scenario] }}: 'dict object' has no >>> attribute u'aio'"} >>> >>> RUNNING HANDLER [sshd : Reload the SSH service] >>> ************************************************************************************************************* >>> to retry, use: --limit @/opt/openstack-ansible/tests/bootstrap-aio.retry >>> >>> PLAY RECAP ************************************************************************************************************************************************** >>> localhost : ok=61 changed=36 unreachable=0 failed=2 >>> >>> [root at aio openstack-ansible]# From marcin.dulak at gmail.com Sun Feb 4 10:29:22 2018 From: marcin.dulak at gmail.com (Marcin Dulak) Date: Sun, 4 Feb 2018 11:29:22 +0100 Subject: [Openstack] openstack-ansible aio error In-Reply-To: References: Message-ID: When playing with openstack-ansible do it in a virtual setup (e.g. nested virtualization with libvirt) so you can reproducibly bring up your environment from scratch. You will have to do it multiple times. https://developer.rackspace.com/blog/life-without-devstack-openstack- development-with-osa/ is more than 2 years old. 
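One way to keep the environment reproducible is a throwaway libvirt VM plus a snapshot you can roll back to after every broken run, for example (the names and sizes here are only an example):

virt-install --name osa-aio --memory 16384 --vcpus 8 --disk size=100 --cdrom CentOS-7-x86_64-DVD.iso --os-variant centos7.0
virsh snapshot-create-as osa-aio clean-base
# run the AIO bootstrap inside the VM, and after a failed attempt:
virsh snapshot-revert osa-aio clean-base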
Try to follow https://docs.openstack.org/openstack-ansible/latest/contributor/quickstart-aio.html but git clone the latest state of the openstack-ansible repo. The above page has a link that can be used to submit bugs directly to the openstack-ansible project at launchpad. In this way you may be able to cleanup/improve the documentation, and since your setup is the simplest possible one your bug reports may get noticed and reproduced by the developers. What happens is that most people try openstack-ansible, don't report bugs, or report the bugs without the information neccesary to reproduce them, and abandon the whole idea. Try to search https://bugs.launchpad.net/openstack-ansible/+bugs?field.searchtext=galera for inspiration about what to do. Currently the galera setup in openstack-ansible, especially on centos7 seems to be undergoing some critical changes. Enter the galera container: lxc-attach -n aio1_galera_container-4f488f6a look around it, check whether mysqld is running etc., try to identify which ansible tasks failed and run them manually inside of the container. Marcin On Sun, Feb 4, 2018 at 3:41 AM, Satish Patel wrote: > I have noticed in output "aio1_galera_container" is failed, how do i > fixed this kind of issue? > > > > PLAY RECAP ************************************************************ > ************************************************************ > ************************************************** > aio1 : ok=41 changed=4 unreachable=0 failed=0 > aio1_cinder_api_container-2af4dd01 : ok=0 changed=0 > unreachable=0 failed=0 > aio1_cinder_scheduler_container-454db1fb : ok=0 changed=0 > unreachable=0 failed=0 > aio1_designate_container-f7ea3f73 : ok=0 changed=0 unreachable=0 > failed=0 > aio1_galera_container-4f488f6a : ok=32 changed=3 unreachable=0 > failed=1 > > On Sat, Feb 3, 2018 at 9:26 PM, Satish Patel wrote: > > I have re-install centos7 and give it a try and got this error > > > > DEBUG MESSAGE RECAP ****************************** > ****************************** > > DEBUG: [Load local packages] ****************************** > ********************* > > All items completed > > > > Saturday 03 February 2018 21:04:07 -0500 (0:00:04.175) > 0:16:17.204 ***** > > ============================================================ > =================== > > repo_build : Create OpenStack-Ansible requirement wheels -------------- > 268.16s > > repo_build : Wait for the venvs builds to complete -------------------- > 110.30s > > repo_build : Install packages ------------------------------------------ > 68.26s > > repo_build : Clone git repositories asynchronously --------------------- > 59.85s > > pip_install : Install distro packages ---------------------------------- > 36.72s > > galera_client : Install galera distro packages ------------------------- > 33.21s > > haproxy_server : Create haproxy service config files ------------------- > 30.81s > > repo_build : Execute the venv build scripts asynchonously -------------- > 29.69s > > pip_install : Install distro packages ---------------------------------- > 23.56s > > repo_server : Install repo server packages ----------------------------- > 20.11s > > memcached_server : Install distro packages ----------------------------- > 16.35s > > repo_build : Create venv build options files --------------------------- > 14.57s > > haproxy_server : Install HAProxy Packages ------------------------------- > 8.35s > > rsyslog_client : Install rsyslog packages ------------------------------- > 8.33s > > rsyslog_client : Install rsyslog 
packages ------------------------------- > 7.64s > > rsyslog_client : Install rsyslog packages ------------------------------- > 7.42s > > repo_build : Wait for git clones to complete > ---------------------------- 7.25s > > repo_server : Install repo caching server packages > ---------------------- 4.76s > > galera_server : Check that WSREP is ready ------------------------------- > 4.18s > > repo_server : Git service data folder setup > ----------------------------- 4.04s > > ++ exit_fail 341 0 > > ++ set +x > > ++ info_block 'Error Info - 341' 0 > > ++ echo ------------------------------------------------------------ > ---------- > > ---------------------------------------------------------------------- > > ++ print_info 'Error Info - 341' 0 > > ++ PROC_NAME='- [ Error Info - 341 0 ] -' > > ++ printf '\n%s%s\n' '- [ Error Info - 341 0 ] -' > > -------------------------------------------- > > > > - [ Error Info - 341 0 ] --------------------------------------------- > > ++ echo ------------------------------------------------------------ > ---------- > > ---------------------------------------------------------------------- > > ++ exit_state 1 > > ++ set +x > > ---------------------------------------------------------------------- > > > > - [ Run Time = 2030 seconds || 33 minutes ] -------------------------- > > ---------------------------------------------------------------------- > > ---------------------------------------------------------------------- > > > > - [ Status: Failure ] ------------------------------------------------ > > ---------------------------------------------------------------------- > > > > > > > > > > I don't know why it failed > > > > but i tried following: > > > > [root at aio ~]# lxc-ls -f > > NAME STATE AUTOSTART GROUPS > > IPV4 IPV6 > > aio1_cinder_api_container-2af4dd01 RUNNING 1 onboot, > > openstack 10.255.255.62, 172.29.238.210, 172.29.244.152 - > > aio1_cinder_scheduler_container-454db1fb RUNNING 1 onboot, > > openstack 10.255.255.117, 172.29.239.172 - > > aio1_designate_container-f7ea3f73 RUNNING 1 onboot, > > openstack 10.255.255.235, 172.29.239.166 - > > aio1_galera_container-4f488f6a RUNNING 1 onboot, > > openstack 10.255.255.193, 172.29.236.69 - > > aio1_glance_container-f8caa9e6 RUNNING 1 onboot, > > openstack 10.255.255.225, 172.29.239.52, 172.29.246.25 - > > aio1_heat_api_container-8321a763 RUNNING 1 onboot, > > openstack 10.255.255.104, 172.29.236.186 - > > aio1_heat_apis_container-3f70ad74 RUNNING 1 onboot, > > openstack 10.255.255.166, 172.29.239.13 - > > aio1_heat_engine_container-a18e5a0a RUNNING 1 onboot, > > openstack 10.255.255.118, 172.29.238.7 - > > aio1_horizon_container-e493275c RUNNING 1 onboot, > > openstack 10.255.255.98, 172.29.237.43 - > > aio1_keystone_container-c0e23e14 RUNNING 1 onboot, > > openstack 10.255.255.60, 172.29.237.165 - > > aio1_memcached_container-ef8fed4c RUNNING 1 onboot, > > openstack 10.255.255.214, 172.29.238.211 - > > aio1_neutron_agents_container-131e996e RUNNING 1 onboot, > > openstack 10.255.255.153, 172.29.237.246, 172.29.243.227 - > > aio1_neutron_server_container-ccd69394 RUNNING 1 onboot, > > openstack 10.255.255.27, 172.29.236.129 - > > aio1_nova_api_container-73274024 RUNNING 1 onboot, > > openstack 10.255.255.42, 172.29.238.201 - > > aio1_nova_api_metadata_container-a1d32282 RUNNING 1 onboot, > > openstack 10.255.255.218, 172.29.238.153 - > > aio1_nova_api_os_compute_container-52725940 RUNNING 1 onboot, > > openstack 10.255.255.109, 172.29.236.126 - > > aio1_nova_api_placement_container-058e8031 
RUNNING 1 onboot, > > openstack 10.255.255.29, 172.29.236.157 - > > aio1_nova_conductor_container-9b6b208c RUNNING 1 onboot, > > openstack 10.255.255.18, 172.29.239.9 - > > aio1_nova_console_container-0fb8995c RUNNING 1 onboot, > > openstack 10.255.255.47, 172.29.237.129 - > > aio1_nova_scheduler_container-8f7a657a RUNNING 1 onboot, > > openstack 10.255.255.195, 172.29.238.113 - > > aio1_rabbit_mq_container-c3450d66 RUNNING 1 onboot, > > openstack 10.255.255.111, 172.29.237.202 - > > aio1_repo_container-8e07fdef RUNNING 1 onboot, > > openstack 10.255.255.141, 172.29.239.79 - > > aio1_rsyslog_container-b198fbe5 RUNNING 1 onboot, > > openstack 10.255.255.13, 172.29.236.195 - > > aio1_swift_proxy_container-1a3536e1 RUNNING 1 onboot, > > openstack 10.255.255.108, 172.29.237.31, 172.29.244.248 - > > aio1_utility_container-bd106f11 RUNNING 1 onboot, > > openstack 10.255.255.54, 172.29.239.124 - > > [root at aio ~]# lxc-a > > lxc-attach lxc-autostart > > [root at aio ~]# lxc-attach -n aio1_utility_container-bd106f11 > > [root at aio1-utility-container-bd106f11 ~]# > > [root at aio1-utility-container-bd106f11 ~]# source /root/openrc > > [root at aio1-utility-container-bd106f11 ~]# openstack > > openstack openstack-host-hostfile-setup.sh > > [root at aio1-utility-container-bd106f11 ~]# openstack > > openstack openstack-host-hostfile-setup.sh > > [root at aio1-utility-container-bd106f11 ~]# openstack user list > > Failed to discover available identity versions when contacting > > http://172.29.236.100:5000/v3. Attempting to parse version from URL. > > Service Unavailable (HTTP 503) > > [root at aio1-utility-container-bd106f11 ~]# > > > > > > not sure what is this error ? > > > > > > On Sat, Feb 3, 2018 at 7:29 PM, Satish Patel > wrote: > >> I have tired everything but didn't able to find solution :( what i am > >> doing wrong here, i am following this instruction and please let me > >> know if i am wrong > >> > >> https://developer.rackspace.com/blog/life-without-devstack-openstack- > development-with-osa/ > >> > >> I have CentOS7, with 8 CPU and 16GB memory with 100GB disk size. > >> > >> Error: http://paste.openstack.org/show/660497/ > >> > >> > >> I have tired gate-check-commit.sh but same error :( > >> > >> > >> > >> On Sat, Feb 3, 2018 at 1:11 AM, Satish Patel > wrote: > >>> I have started playing with openstack-ansible on CentOS7 and trying to > >>> install All-in-one but got this error and not sure what cause that > >>> error how do i troubleshoot it? 
> >>> > >>> > >>> TASK [bootstrap-host : Remove an existing private/public ssh keys if > >>> one is missing] > >>> ************************************************************ > ************ > >>> skipping: [localhost] => (item=id_rsa) > >>> skipping: [localhost] => (item=id_rsa.pub) > >>> > >>> TASK [bootstrap-host : Create ssh key pair for root] > >>> ************************************************************ > ******************************************** > >>> ok: [localhost] > >>> > >>> TASK [bootstrap-host : Fetch the generated public ssh key] > >>> ************************************************************ > ************************************** > >>> changed: [localhost] > >>> > >>> TASK [bootstrap-host : Ensure root's new public ssh key is in > >>> authorized_keys] > >>> ************************************************************ > ****************** > >>> ok: [localhost] > >>> > >>> TASK [bootstrap-host : Create the required deployment directories] > >>> ************************************************************ > ****************************** > >>> changed: [localhost] => (item=/etc/openstack_deploy) > >>> changed: [localhost] => (item=/etc/openstack_deploy/conf.d) > >>> changed: [localhost] => (item=/etc/openstack_deploy/env.d) > >>> > >>> TASK [bootstrap-host : Deploy user conf.d configuration] > >>> ************************************************************ > **************************************** > >>> fatal: [localhost]: FAILED! => {"msg": "{{ > >>> confd_overrides[bootstrap_host_scenario] }}: 'dict object' has no > >>> attribute u'aio'"} > >>> > >>> RUNNING HANDLER [sshd : Reload the SSH service] > >>> ************************************************************ > ************************************************* > >>> to retry, use: --limit @/opt/openstack-ansible/tests/ > bootstrap-aio.retry > >>> > >>> PLAY RECAP ****************************** > ************************************************************ > ******************************************************** > >>> localhost : ok=61 changed=36 unreachable=0 > failed=2 > >>> > >>> [root at aio openstack-ansible]# > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sun Feb 4 15:53:34 2018 From: satish.txt at gmail.com (Satish Patel) Date: Sun, 4 Feb 2018 10:53:34 -0500 Subject: [Openstack] openstack-ansible aio error In-Reply-To: References: Message-ID: Hi Marcin, Thank you, i will try other link, also i am using CentOS7 but anyway now question is does openstack-ansible ready for production deployment despite galera issues and bug? If i want to go on production should i wait or find other tools to deploy on production? On Sun, Feb 4, 2018 at 5:29 AM, Marcin Dulak wrote: > When playing with openstack-ansible do it in a virtual setup (e.g. nested > virtualization with libvirt) so you can reproducibly bring up your > environment from scratch. > You will have to do it multiple times. > > https://developer.rackspace.com/blog/life-without-devstack-openstack-development-with-osa/ > is more than 2 years old. > > Try to follow > https://docs.openstack.org/openstack-ansible/latest/contributor/quickstart-aio.html > but git clone the latest state of the openstack-ansible repo. 
> The above page has a link that can be used to submit bugs directly to the > openstack-ansible project at launchpad. > In this way you may be able to cleanup/improve the documentation, > and since your setup is the simplest possible one your bug reports may get > noticed and reproduced by the developers. > What happens is that most people try openstack-ansible, don't report bugs, > or report the bugs without the information neccesary > to reproduce them, and abandon the whole idea. > > Try to search > https://bugs.launchpad.net/openstack-ansible/+bugs?field.searchtext=galera > for inspiration about what to do. > Currently the galera setup in openstack-ansible, especially on centos7 seems > to be undergoing some critical changes. > Enter the galera container: > lxc-attach -n aio1_galera_container-4f488f6a > look around it, check whether mysqld is running etc., try to identify which > ansible tasks failed and run them manually inside of the container. > > Marcin > > > On Sun, Feb 4, 2018 at 3:41 AM, Satish Patel wrote: >> >> I have noticed in output "aio1_galera_container" is failed, how do i >> fixed this kind of issue? >> >> >> >> PLAY RECAP >> ************************************************************************************************************************************************************************** >> aio1 : ok=41 changed=4 unreachable=0 >> failed=0 >> aio1_cinder_api_container-2af4dd01 : ok=0 changed=0 >> unreachable=0 failed=0 >> aio1_cinder_scheduler_container-454db1fb : ok=0 changed=0 >> unreachable=0 failed=0 >> aio1_designate_container-f7ea3f73 : ok=0 changed=0 unreachable=0 >> failed=0 >> aio1_galera_container-4f488f6a : ok=32 changed=3 unreachable=0 >> failed=1 >> >> On Sat, Feb 3, 2018 at 9:26 PM, Satish Patel wrote: >> > I have re-install centos7 and give it a try and got this error >> > >> > DEBUG MESSAGE RECAP >> > ************************************************************ >> > DEBUG: [Load local packages] >> > *************************************************** >> > All items completed >> > >> > Saturday 03 February 2018 21:04:07 -0500 (0:00:04.175) >> > 0:16:17.204 ***** >> > >> > =============================================================================== >> > repo_build : Create OpenStack-Ansible requirement wheels -------------- >> > 268.16s >> > repo_build : Wait for the venvs builds to complete -------------------- >> > 110.30s >> > repo_build : Install packages ------------------------------------------ >> > 68.26s >> > repo_build : Clone git repositories asynchronously --------------------- >> > 59.85s >> > pip_install : Install distro packages ---------------------------------- >> > 36.72s >> > galera_client : Install galera distro packages ------------------------- >> > 33.21s >> > haproxy_server : Create haproxy service config files ------------------- >> > 30.81s >> > repo_build : Execute the venv build scripts asynchonously -------------- >> > 29.69s >> > pip_install : Install distro packages ---------------------------------- >> > 23.56s >> > repo_server : Install repo server packages ----------------------------- >> > 20.11s >> > memcached_server : Install distro packages ----------------------------- >> > 16.35s >> > repo_build : Create venv build options files --------------------------- >> > 14.57s >> > haproxy_server : Install HAProxy Packages >> > ------------------------------- 8.35s >> > rsyslog_client : Install rsyslog packages >> > ------------------------------- 8.33s >> > rsyslog_client : Install rsyslog packages >> > 
------------------------------- 7.64s >> > rsyslog_client : Install rsyslog packages >> > ------------------------------- 7.42s >> > repo_build : Wait for git clones to complete >> > ---------------------------- 7.25s >> > repo_server : Install repo caching server packages >> > ---------------------- 4.76s >> > galera_server : Check that WSREP is ready >> > ------------------------------- 4.18s >> > repo_server : Git service data folder setup >> > ----------------------------- 4.04s >> > ++ exit_fail 341 0 >> > ++ set +x >> > ++ info_block 'Error Info - 341' 0 >> > ++ echo >> > ---------------------------------------------------------------------- >> > ---------------------------------------------------------------------- >> > ++ print_info 'Error Info - 341' 0 >> > ++ PROC_NAME='- [ Error Info - 341 0 ] -' >> > ++ printf '\n%s%s\n' '- [ Error Info - 341 0 ] -' >> > -------------------------------------------- >> > >> > - [ Error Info - 341 0 ] --------------------------------------------- >> > ++ echo >> > ---------------------------------------------------------------------- >> > ---------------------------------------------------------------------- >> > ++ exit_state 1 >> > ++ set +x >> > ---------------------------------------------------------------------- >> > >> > - [ Run Time = 2030 seconds || 33 minutes ] -------------------------- >> > ---------------------------------------------------------------------- >> > ---------------------------------------------------------------------- >> > >> > - [ Status: Failure ] ------------------------------------------------ >> > ---------------------------------------------------------------------- >> > >> > >> > >> > >> > I don't know why it failed >> > >> > but i tried following: >> > >> > [root at aio ~]# lxc-ls -f >> > NAME STATE AUTOSTART GROUPS >> > IPV4 IPV6 >> > aio1_cinder_api_container-2af4dd01 RUNNING 1 onboot, >> > openstack 10.255.255.62, 172.29.238.210, 172.29.244.152 - >> > aio1_cinder_scheduler_container-454db1fb RUNNING 1 onboot, >> > openstack 10.255.255.117, 172.29.239.172 - >> > aio1_designate_container-f7ea3f73 RUNNING 1 onboot, >> > openstack 10.255.255.235, 172.29.239.166 - >> > aio1_galera_container-4f488f6a RUNNING 1 onboot, >> > openstack 10.255.255.193, 172.29.236.69 - >> > aio1_glance_container-f8caa9e6 RUNNING 1 onboot, >> > openstack 10.255.255.225, 172.29.239.52, 172.29.246.25 - >> > aio1_heat_api_container-8321a763 RUNNING 1 onboot, >> > openstack 10.255.255.104, 172.29.236.186 - >> > aio1_heat_apis_container-3f70ad74 RUNNING 1 onboot, >> > openstack 10.255.255.166, 172.29.239.13 - >> > aio1_heat_engine_container-a18e5a0a RUNNING 1 onboot, >> > openstack 10.255.255.118, 172.29.238.7 - >> > aio1_horizon_container-e493275c RUNNING 1 onboot, >> > openstack 10.255.255.98, 172.29.237.43 - >> > aio1_keystone_container-c0e23e14 RUNNING 1 onboot, >> > openstack 10.255.255.60, 172.29.237.165 - >> > aio1_memcached_container-ef8fed4c RUNNING 1 onboot, >> > openstack 10.255.255.214, 172.29.238.211 - >> > aio1_neutron_agents_container-131e996e RUNNING 1 onboot, >> > openstack 10.255.255.153, 172.29.237.246, 172.29.243.227 - >> > aio1_neutron_server_container-ccd69394 RUNNING 1 onboot, >> > openstack 10.255.255.27, 172.29.236.129 - >> > aio1_nova_api_container-73274024 RUNNING 1 onboot, >> > openstack 10.255.255.42, 172.29.238.201 - >> > aio1_nova_api_metadata_container-a1d32282 RUNNING 1 onboot, >> > openstack 10.255.255.218, 172.29.238.153 - >> > aio1_nova_api_os_compute_container-52725940 RUNNING 1 onboot, >> > openstack 
10.255.255.109, 172.29.236.126 - >> > aio1_nova_api_placement_container-058e8031 RUNNING 1 onboot, >> > openstack 10.255.255.29, 172.29.236.157 - >> > aio1_nova_conductor_container-9b6b208c RUNNING 1 onboot, >> > openstack 10.255.255.18, 172.29.239.9 - >> > aio1_nova_console_container-0fb8995c RUNNING 1 onboot, >> > openstack 10.255.255.47, 172.29.237.129 - >> > aio1_nova_scheduler_container-8f7a657a RUNNING 1 onboot, >> > openstack 10.255.255.195, 172.29.238.113 - >> > aio1_rabbit_mq_container-c3450d66 RUNNING 1 onboot, >> > openstack 10.255.255.111, 172.29.237.202 - >> > aio1_repo_container-8e07fdef RUNNING 1 onboot, >> > openstack 10.255.255.141, 172.29.239.79 - >> > aio1_rsyslog_container-b198fbe5 RUNNING 1 onboot, >> > openstack 10.255.255.13, 172.29.236.195 - >> > aio1_swift_proxy_container-1a3536e1 RUNNING 1 onboot, >> > openstack 10.255.255.108, 172.29.237.31, 172.29.244.248 - >> > aio1_utility_container-bd106f11 RUNNING 1 onboot, >> > openstack 10.255.255.54, 172.29.239.124 - >> > [root at aio ~]# lxc-a >> > lxc-attach lxc-autostart >> > [root at aio ~]# lxc-attach -n aio1_utility_container-bd106f11 >> > [root at aio1-utility-container-bd106f11 ~]# >> > [root at aio1-utility-container-bd106f11 ~]# source /root/openrc >> > [root at aio1-utility-container-bd106f11 ~]# openstack >> > openstack openstack-host-hostfile-setup.sh >> > [root at aio1-utility-container-bd106f11 ~]# openstack >> > openstack openstack-host-hostfile-setup.sh >> > [root at aio1-utility-container-bd106f11 ~]# openstack user list >> > Failed to discover available identity versions when contacting >> > http://172.29.236.100:5000/v3. Attempting to parse version from URL. >> > Service Unavailable (HTTP 503) >> > [root at aio1-utility-container-bd106f11 ~]# >> > >> > >> > not sure what is this error ? >> > >> > >> > On Sat, Feb 3, 2018 at 7:29 PM, Satish Patel >> > wrote: >> >> I have tired everything but didn't able to find solution :( what i am >> >> doing wrong here, i am following this instruction and please let me >> >> know if i am wrong >> >> >> >> >> >> https://developer.rackspace.com/blog/life-without-devstack-openstack-development-with-osa/ >> >> >> >> I have CentOS7, with 8 CPU and 16GB memory with 100GB disk size. >> >> >> >> Error: http://paste.openstack.org/show/660497/ >> >> >> >> >> >> I have tired gate-check-commit.sh but same error :( >> >> >> >> >> >> >> >> On Sat, Feb 3, 2018 at 1:11 AM, Satish Patel >> >> wrote: >> >>> I have started playing with openstack-ansible on CentOS7 and trying to >> >>> install All-in-one but got this error and not sure what cause that >> >>> error how do i troubleshoot it? 
>> >>>
>> >>>
>> >>> TASK [bootstrap-host : Remove an existing private/public ssh keys if
>> >>> one is missing]
>> >>>
>> >>> ************************************************************************
>> >>> skipping: [localhost] => (item=id_rsa)
>> >>> skipping: [localhost] => (item=id_rsa.pub)
>> >>>
>> >>> TASK [bootstrap-host : Create ssh key pair for root]
>> >>>
>> >>> ********************************************************************************************************
>> >>> ok: [localhost]
>> >>>
>> >>> TASK [bootstrap-host : Fetch the generated public ssh key]
>> >>>
>> >>> **************************************************************************************************
>> >>> changed: [localhost]
>> >>>
>> >>> TASK [bootstrap-host : Ensure root's new public ssh key is in
>> >>> authorized_keys]
>> >>>
>> >>> ******************************************************************************
>> >>> ok: [localhost]
>> >>>
>> >>> TASK [bootstrap-host : Create the required deployment directories]
>> >>>
>> >>> ******************************************************************************************
>> >>> changed: [localhost] => (item=/etc/openstack_deploy)
>> >>> changed: [localhost] => (item=/etc/openstack_deploy/conf.d)
>> >>> changed: [localhost] => (item=/etc/openstack_deploy/env.d)
>> >>>
>> >>> TASK [bootstrap-host : Deploy user conf.d configuration]
>> >>>
>> >>> ****************************************************************************************************
>> >>> fatal: [localhost]: FAILED! => {"msg": "{{
>> >>> confd_overrides[bootstrap_host_scenario] }}: 'dict object' has no
>> >>> attribute u'aio'"}
>> >>>
>> >>> RUNNING HANDLER [sshd : Reload the SSH service]
>> >>>
>> >>> *************************************************************************************************************
>> >>>         to retry, use: --limit
>> >>> @/opt/openstack-ansible/tests/bootstrap-aio.retry
>> >>>
>> >>> PLAY RECAP
>> >>> **************************************************************************************************************************************************
>> >>> localhost                  : ok=61   changed=36   unreachable=0
>> >>> failed=2
>> >>>
>> >>> [root at aio openstack-ansible]#
>>
>> _______________________________________________
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>

From remo at italy1.com  Sun Feb  4 16:21:25 2018
From: remo at italy1.com (remo at italy1.com)
Date: Sun, 4 Feb 2018 08:21:25 -0800
Subject: [Openstack] openstack-ansible aio error
In-Reply-To: 
References: 
Message-ID: <2D5C1208-BA0A-491D-B232-6B0AB3EBC8CF@italy1.com>

What are you looking for? HA, etc.? TripleO is the way to go for that;
Packstack if you want a simple deployment, but no HA of course.

> On Feb 4, 2018, at 7:53 AM, Satish Patel <satish.txt at gmail.com> wrote:
>
> Hi Marcin,
>
> Thank you, i will try other link, also i am using CentOS7 but anyway
> now question is does openstack-ansible ready for production deployment
> despite galera issues and bug?
>
> If i want to go on production should i wait or find other tools to
> deploy on production?
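
[Note: the "simple deployment but no HA" route mentioned above is RDO's
Packstack all-in-one installer. A minimal sketch of that path on CentOS 7,
assuming the RDO release package names of that period (adjust the release
name to whatever is current):

    # enable the RDO repository and install Packstack
    # (package names assumed from the RDO docs of that era)
    sudo yum install -y centos-release-openstack-pike
    sudo yum install -y openstack-packstack
    # single-node, non-HA proof-of-concept deployment
    sudo packstack --allinone

This is only an illustration of the trade-off being discussed, not a tested
recipe.]
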
From marcin.dulak at gmail.com  Sun Feb  4 18:21:32 2018
From: marcin.dulak at gmail.com (Marcin Dulak)
Date: Sun, 4 Feb 2018 19:21:32 +0100
Subject: [Openstack] openstack-ansible aio error
In-Reply-To: 
References: 
Message-ID: 

Since you already invested some time, repeat your installation from scratch
and submit the bug if necessary, maybe someone will look at it.
After that check the other deployment tools, like the just mentioned TripleO.
I'm not sure whether openstack-ansible (or any other tool) is production
ready - just look at the types of bugs those projects are currently dealing
with, but you may be more lucky with an Ubuntu deployment.

Marcin

On Sun, Feb 4, 2018 at 4:53 PM, Satish Patel wrote:

> Hi Marcin,
>
> Thank you, i will try other link, also i am using CentOS7 but anyway
> now question is does openstack-ansible ready for production deployment
> despite galera issues and bug?
>
> If i want to go on production should i wait or find other tools to
> deploy on production?
>
> On Sun, Feb 4, 2018 at 5:29 AM, Marcin Dulak
> wrote:
> > When playing with openstack-ansible do it in a virtual setup (e.g. nested
> > virtualization with libvirt) so you can reproducibly bring up your
> > environment from scratch.
> > You will have to do it multiple times.
> >
> > https://developer.rackspace.com/blog/life-without-devstack-openstack-development-with-osa/
> > is more than 2 years old.
> >
> > Try to follow
> > https://docs.openstack.org/openstack-ansible/latest/contributor/quickstart-aio.html
> > but git clone the latest state of the openstack-ansible repo.
> > The above page has a link that can be used to submit bugs directly to the
> > openstack-ansible project at launchpad.
> > In this way you may be able to cleanup/improve the documentation, > > and since your setup is the simplest possible one your bug reports may > get > > noticed and reproduced by the developers. > > What happens is that most people try openstack-ansible, don't report > bugs, > > or report the bugs without the information neccesary > > to reproduce them, and abandon the whole idea. > > > > Try to search > > https://bugs.launchpad.net/openstack-ansible/+bugs?field. > searchtext=galera > > for inspiration about what to do. > > Currently the galera setup in openstack-ansible, especially on centos7 > seems > > to be undergoing some critical changes. > > Enter the galera container: > > lxc-attach -n aio1_galera_container-4f488f6a > > look around it, check whether mysqld is running etc., try to identify > which > > ansible tasks failed and run them manually inside of the container. > > > > Marcin > > > > > > On Sun, Feb 4, 2018 at 3:41 AM, Satish Patel > wrote: > >> > >> I have noticed in output "aio1_galera_container" is failed, how do i > >> fixed this kind of issue? > >> > >> > >> > >> PLAY RECAP > >> ************************************************************ > ************************************************************ > ************************************************** > >> aio1 : ok=41 changed=4 unreachable=0 > >> failed=0 > >> aio1_cinder_api_container-2af4dd01 : ok=0 changed=0 > >> unreachable=0 failed=0 > >> aio1_cinder_scheduler_container-454db1fb : ok=0 changed=0 > >> unreachable=0 failed=0 > >> aio1_designate_container-f7ea3f73 : ok=0 changed=0 unreachable=0 > >> failed=0 > >> aio1_galera_container-4f488f6a : ok=32 changed=3 unreachable=0 > >> failed=1 > >> > >> On Sat, Feb 3, 2018 at 9:26 PM, Satish Patel > wrote: > >> > I have re-install centos7 and give it a try and got this error > >> > > >> > DEBUG MESSAGE RECAP > >> > ************************************************************ > >> > DEBUG: [Load local packages] > >> > *************************************************** > >> > All items completed > >> > > >> > Saturday 03 February 2018 21:04:07 -0500 (0:00:04.175) > >> > 0:16:17.204 ***** > >> > > >> > ============================================================ > =================== > >> > repo_build : Create OpenStack-Ansible requirement wheels > -------------- > >> > 268.16s > >> > repo_build : Wait for the venvs builds to complete > -------------------- > >> > 110.30s > >> > repo_build : Install packages ------------------------------ > ------------ > >> > 68.26s > >> > repo_build : Clone git repositories asynchronously > --------------------- > >> > 59.85s > >> > pip_install : Install distro packages ------------------------------ > ---- > >> > 36.72s > >> > galera_client : Install galera distro packages > ------------------------- > >> > 33.21s > >> > haproxy_server : Create haproxy service config files > ------------------- > >> > 30.81s > >> > repo_build : Execute the venv build scripts asynchonously > -------------- > >> > 29.69s > >> > pip_install : Install distro packages ------------------------------ > ---- > >> > 23.56s > >> > repo_server : Install repo server packages > ----------------------------- > >> > 20.11s > >> > memcached_server : Install distro packages > ----------------------------- > >> > 16.35s > >> > repo_build : Create venv build options files > --------------------------- > >> > 14.57s > >> > haproxy_server : Install HAProxy Packages > >> > ------------------------------- 8.35s > >> > rsyslog_client : Install rsyslog packages > >> > 
------------------------------- 8.33s > >> > rsyslog_client : Install rsyslog packages > >> > ------------------------------- 7.64s > >> > rsyslog_client : Install rsyslog packages > >> > ------------------------------- 7.42s > >> > repo_build : Wait for git clones to complete > >> > ---------------------------- 7.25s > >> > repo_server : Install repo caching server packages > >> > ---------------------- 4.76s > >> > galera_server : Check that WSREP is ready > >> > ------------------------------- 4.18s > >> > repo_server : Git service data folder setup > >> > ----------------------------- 4.04s > >> > ++ exit_fail 341 0 > >> > ++ set +x > >> > ++ info_block 'Error Info - 341' 0 > >> > ++ echo > >> > ------------------------------------------------------------ > ---------- > >> > ------------------------------------------------------------ > ---------- > >> > ++ print_info 'Error Info - 341' 0 > >> > ++ PROC_NAME='- [ Error Info - 341 0 ] -' > >> > ++ printf '\n%s%s\n' '- [ Error Info - 341 0 ] -' > >> > -------------------------------------------- > >> > > >> > - [ Error Info - 341 0 ] ------------------------------ > --------------- > >> > ++ echo > >> > ------------------------------------------------------------ > ---------- > >> > ------------------------------------------------------------ > ---------- > >> > ++ exit_state 1 > >> > ++ set +x > >> > ------------------------------------------------------------ > ---------- > >> > > >> > - [ Run Time = 2030 seconds || 33 minutes ] -------------------------- > >> > ------------------------------------------------------------ > ---------- > >> > ------------------------------------------------------------ > ---------- > >> > > >> > - [ Status: Failure ] ------------------------------ > ------------------ > >> > ------------------------------------------------------------ > ---------- > >> > > >> > > >> > > >> > > >> > I don't know why it failed > >> > > >> > but i tried following: > >> > > >> > [root at aio ~]# lxc-ls -f > >> > NAME STATE AUTOSTART GROUPS > >> > IPV4 IPV6 > >> > aio1_cinder_api_container-2af4dd01 RUNNING 1 onboot, > >> > openstack 10.255.255.62, 172.29.238.210, 172.29.244.152 - > >> > aio1_cinder_scheduler_container-454db1fb RUNNING 1 onboot, > >> > openstack 10.255.255.117, 172.29.239.172 - > >> > aio1_designate_container-f7ea3f73 RUNNING 1 onboot, > >> > openstack 10.255.255.235, 172.29.239.166 - > >> > aio1_galera_container-4f488f6a RUNNING 1 onboot, > >> > openstack 10.255.255.193, 172.29.236.69 - > >> > aio1_glance_container-f8caa9e6 RUNNING 1 onboot, > >> > openstack 10.255.255.225, 172.29.239.52, 172.29.246.25 - > >> > aio1_heat_api_container-8321a763 RUNNING 1 onboot, > >> > openstack 10.255.255.104, 172.29.236.186 - > >> > aio1_heat_apis_container-3f70ad74 RUNNING 1 onboot, > >> > openstack 10.255.255.166, 172.29.239.13 - > >> > aio1_heat_engine_container-a18e5a0a RUNNING 1 onboot, > >> > openstack 10.255.255.118, 172.29.238.7 - > >> > aio1_horizon_container-e493275c RUNNING 1 onboot, > >> > openstack 10.255.255.98, 172.29.237.43 - > >> > aio1_keystone_container-c0e23e14 RUNNING 1 onboot, > >> > openstack 10.255.255.60, 172.29.237.165 - > >> > aio1_memcached_container-ef8fed4c RUNNING 1 onboot, > >> > openstack 10.255.255.214, 172.29.238.211 - > >> > aio1_neutron_agents_container-131e996e RUNNING 1 onboot, > >> > openstack 10.255.255.153, 172.29.237.246, 172.29.243.227 - > >> > aio1_neutron_server_container-ccd69394 RUNNING 1 onboot, > >> > openstack 10.255.255.27, 172.29.236.129 - > >> > 
aio1_nova_api_container-73274024 RUNNING 1 onboot, > >> > openstack 10.255.255.42, 172.29.238.201 - > >> > aio1_nova_api_metadata_container-a1d32282 RUNNING 1 onboot, > >> > openstack 10.255.255.218, 172.29.238.153 - > >> > aio1_nova_api_os_compute_container-52725940 RUNNING 1 onboot, > >> > openstack 10.255.255.109, 172.29.236.126 - > >> > aio1_nova_api_placement_container-058e8031 RUNNING 1 onboot, > >> > openstack 10.255.255.29, 172.29.236.157 - > >> > aio1_nova_conductor_container-9b6b208c RUNNING 1 onboot, > >> > openstack 10.255.255.18, 172.29.239.9 - > >> > aio1_nova_console_container-0fb8995c RUNNING 1 onboot, > >> > openstack 10.255.255.47, 172.29.237.129 - > >> > aio1_nova_scheduler_container-8f7a657a RUNNING 1 onboot, > >> > openstack 10.255.255.195, 172.29.238.113 - > >> > aio1_rabbit_mq_container-c3450d66 RUNNING 1 onboot, > >> > openstack 10.255.255.111, 172.29.237.202 - > >> > aio1_repo_container-8e07fdef RUNNING 1 onboot, > >> > openstack 10.255.255.141, 172.29.239.79 - > >> > aio1_rsyslog_container-b198fbe5 RUNNING 1 onboot, > >> > openstack 10.255.255.13, 172.29.236.195 - > >> > aio1_swift_proxy_container-1a3536e1 RUNNING 1 onboot, > >> > openstack 10.255.255.108, 172.29.237.31, 172.29.244.248 - > >> > aio1_utility_container-bd106f11 RUNNING 1 onboot, > >> > openstack 10.255.255.54, 172.29.239.124 - > >> > [root at aio ~]# lxc-a > >> > lxc-attach lxc-autostart > >> > [root at aio ~]# lxc-attach -n aio1_utility_container-bd106f11 > >> > [root at aio1-utility-container-bd106f11 ~]# > >> > [root at aio1-utility-container-bd106f11 ~]# source /root/openrc > >> > [root at aio1-utility-container-bd106f11 ~]# openstack > >> > openstack openstack-host-hostfile-setup.sh > >> > [root at aio1-utility-container-bd106f11 ~]# openstack > >> > openstack openstack-host-hostfile-setup.sh > >> > [root at aio1-utility-container-bd106f11 ~]# openstack user list > >> > Failed to discover available identity versions when contacting > >> > http://172.29.236.100:5000/v3. Attempting to parse version from URL. > >> > Service Unavailable (HTTP 503) > >> > [root at aio1-utility-container-bd106f11 ~]# > >> > > >> > > >> > not sure what is this error ? > >> > > >> > > >> > On Sat, Feb 3, 2018 at 7:29 PM, Satish Patel > >> > wrote: > >> >> I have tired everything but didn't able to find solution :( what i > am > >> >> doing wrong here, i am following this instruction and please let me > >> >> know if i am wrong > >> >> > >> >> > >> >> https://developer.rackspace.com/blog/life-without- > devstack-openstack-development-with-osa/ > >> >> > >> >> I have CentOS7, with 8 CPU and 16GB memory with 100GB disk size. > >> >> > >> >> Error: http://paste.openstack.org/show/660497/ > >> >> > >> >> > >> >> I have tired gate-check-commit.sh but same error :( > >> >> > >> >> > >> >> > >> >> On Sat, Feb 3, 2018 at 1:11 AM, Satish Patel > >> >> wrote: > >> >>> I have started playing with openstack-ansible on CentOS7 and trying > to > >> >>> install All-in-one but got this error and not sure what cause that > >> >>> error how do i troubleshoot it? 
> >> >>> > >> >>> > >> >>> TASK [bootstrap-host : Remove an existing private/public ssh keys if > >> >>> one is missing] > >> >>> > >> >>> ************************************************************ > ************ > >> >>> skipping: [localhost] => (item=id_rsa) > >> >>> skipping: [localhost] => (item=id_rsa.pub) > >> >>> > >> >>> TASK [bootstrap-host : Create ssh key pair for root] > >> >>> > >> >>> ************************************************************ > ******************************************** > >> >>> ok: [localhost] > >> >>> > >> >>> TASK [bootstrap-host : Fetch the generated public ssh key] > >> >>> > >> >>> ************************************************************ > ************************************** > >> >>> changed: [localhost] > >> >>> > >> >>> TASK [bootstrap-host : Ensure root's new public ssh key is in > >> >>> authorized_keys] > >> >>> > >> >>> ************************************************************ > ****************** > >> >>> ok: [localhost] > >> >>> > >> >>> TASK [bootstrap-host : Create the required deployment directories] > >> >>> > >> >>> ************************************************************ > ****************************** > >> >>> changed: [localhost] => (item=/etc/openstack_deploy) > >> >>> changed: [localhost] => (item=/etc/openstack_deploy/conf.d) > >> >>> changed: [localhost] => (item=/etc/openstack_deploy/env.d) > >> >>> > >> >>> TASK [bootstrap-host : Deploy user conf.d configuration] > >> >>> > >> >>> ************************************************************ > **************************************** > >> >>> fatal: [localhost]: FAILED! => {"msg": "{{ > >> >>> confd_overrides[bootstrap_host_scenario] }}: 'dict object' has no > >> >>> attribute u'aio'"} > >> >>> > >> >>> RUNNING HANDLER [sshd : Reload the SSH service] > >> >>> > >> >>> ************************************************************ > ************************************************* > >> >>> to retry, use: --limit > >> >>> @/opt/openstack-ansible/tests/bootstrap-aio.retry > >> >>> > >> >>> PLAY RECAP > >> >>> ************************************************************ > ************************************************************ > ************************** > >> >>> localhost : ok=61 changed=36 unreachable=0 > >> >>> failed=2 > >> >>> > >> >>> [root at aio openstack-ansible]# > >> > >> _______________________________________________ > >> Mailing list: > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >> Post to : openstack at lists.openstack.org > >> Unsubscribe : > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From remo at italy1.com Sun Feb 4 18:56:01 2018
From: remo at italy1.com (remo at italy1.com)
Date: Sun, 4 Feb 2018 10:56:01 -0800
Subject: [Openstack] openstack-ansible aio error
In-Reply-To: 
References: 
Message-ID: <049D1DE1-C019-4F9A-ADD5-3CF0D7B82904@italy1.com>

I recall that the ansible project worked better with Ubuntu and not with CentOS.

> On 04 Feb 2018, at 10:21, Marcin Dulak <marcin.dulak at gmail.com> wrote:
> 
> Since you already invested some time, repeat your installation from scratch and submit the bug if necessary, maybe someone will look at it.
> After that check the other deployment tools, like the just mentioned TripleO.
> I'm not sure whether openstack-ansible (or any other tool) is production ready - just look at the types of bugs those projects are currently dealing with,
> but you may be more lucky with an Ubuntu deployment.
> 
> Marcin
> 
>> On Sun, Feb 4, 2018 at 4:53 PM, Satish Patel <satish.txt at gmail.com> wrote:
>> Hi Marcin,
>> 
>> Thank you, i will try other link, also i am using CentOS7 but anyway
>> now question is does openstack-ansible ready for production deployment
>> despite galera issues and bug?
>> 
>> If i want to go on production should i wait or find other tools to
>> deploy on production?
>> 
>> On Sun, Feb 4, 2018 at 5:29 AM, Marcin Dulak <marcin.dulak at gmail.com> wrote:
>> > When playing with openstack-ansible do it in a virtual setup (e.g. nested
>> > virtualization with libvirt) so you can reproducibly bring up your
>> > environment from scratch.
>> > You will have to do it multiple times.
>> >
>> > https://developer.rackspace.com/blog/life-without-devstack-openstack-development-with-osa/
>> > is more than 2 years old.
>> >
>> > Try to follow
>> > https://docs.openstack.org/openstack-ansible/latest/contributor/quickstart-aio.html
>> > but git clone the latest state of the openstack-ansible repo.
>> > The above page has a link that can be used to submit bugs directly to the
>> > openstack-ansible project at launchpad.
>> > In this way you may be able to cleanup/improve the documentation,
>> > and since your setup is the simplest possible one your bug reports may get
>> > noticed and reproduced by the developers.
>> > What happens is that most people try openstack-ansible, don't report bugs,
>> > or report the bugs without the information neccesary
>> > to reproduce them, and abandon the whole idea.
>> >
>> > Try to search
>> > https://bugs.launchpad.net/openstack-ansible/+bugs?field.searchtext=galera
>> > for inspiration about what to do.
>> > Currently the galera setup in openstack-ansible, especially on centos7 seems
>> > to be undergoing some critical changes.
>> > Enter the galera container:
>> > lxc-attach -n aio1_galera_container-4f488f6a
>> > look around it, check whether mysqld is running etc., try to identify which
>> > ansible tasks failed and run them manually inside of the container.
>> >
>> > Marcin
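
A minimal check sequence along the lines suggested above, for the failed aio1_galera_container and for the keystone "Service Unavailable (HTTP 503)" seen from the utility container. The container name and the 172.29.236.100 internal VIP are the ones quoted in this thread; the haproxy stats socket path and the exact mysql client invocation are assumptions and may differ per deployment.

# on the AIO host: is the galera container actually serving SQL?
lxc-attach -n aio1_galera_container-4f488f6a
ps aux | grep -i mysqld                                  # is the daemon running at all?
mysql -e "SHOW STATUS LIKE 'wsrep_ready';"               # expect ON
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_status';"      # expect Primary
exit

# keystone answering 503 on the internal VIP usually means haproxy has no healthy
# backend; list backend states (socket path is an assumption), then hit the endpoint
echo "show stat" | nc -U /var/run/haproxy.stat | cut -d, -f1,2,18 | column -s, -t
curl -i http://172.29.236.100:5000/v3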
IFsgU3RhdHVzOiBGYWlsdXJlIF0gLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tDQo+PiA+PiA+IC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0NCj4+ID4+ID4NCj4+ID4+ID4NCj4+ ID4+ID4NCj4+ID4+ID4NCj4+ID4+ID4gSSBkb24ndCBrbm93IHdoeSBpdCBmYWlsZWQNCj4+ID4+ ID4NCj4+ID4+ID4gYnV0IGkgdHJpZWQgZm9sbG93aW5nOg0KPj4gPj4gPg0KPj4gPj4gPiBbcm9v dEBhaW8gfl0jIGx4Yy1scyAtZg0KPj4gPj4gPiBOQU1FICAgICAgICAgICAgICAgICAgICAgICAg ICAgICAgICAgICAgICAgIFNUQVRFICAgQVVUT1NUQVJUIEdST1VQUw0KPj4gPj4gPiAgICAgICAg ICBJUFY0ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIElQVjYNCj4+ ID4+ID4gYWlvMV9jaW5kZXJfYXBpX2NvbnRhaW5lci0yYWY0ZGQwMSAgICAgICAgICBSVU5OSU5H IDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjYyLCAxNzIu MjkuMjM4LjIxMCwgMTcyLjI5LjI0NC4xNTIgIC0NCj4+ID4+ID4gYWlvMV9jaW5kZXJfc2NoZWR1 bGVyX2NvbnRhaW5lci00NTRkYjFmYiAgICBSVU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+ PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjExNywgMTcyLjI5LjIzOS4xNzIgICAgICAgICAgICAg ICAgIC0NCj4+ID4+ID4gYWlvMV9kZXNpZ25hdGVfY29udGFpbmVyLWY3ZWEzZjczICAgICAgICAg ICBSVU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1 LjIzNSwgMTcyLjI5LjIzOS4xNjYgICAgICAgICAgICAgICAgIC0NCj4+ID4+ID4gYWlvMV9nYWxl cmFfY29udGFpbmVyLTRmNDg4ZjZhICAgICAgICAgICAgICBSVU5OSU5HIDEgICAgICAgICBvbmJv b3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjE5MywgMTcyLjI5LjIzNi42OSAgICAg ICAgICAgICAgICAgIC0NCj4+ID4+ID4gYWlvMV9nbGFuY2VfY29udGFpbmVyLWY4Y2FhOWU2ICAg ICAgICAgICAgICBSVU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAx MC4yNTUuMjU1LjIyNSwgMTcyLjI5LjIzOS41MiwgMTcyLjI5LjI0Ni4yNSAgIC0NCj4+ID4+ID4g YWlvMV9oZWF0X2FwaV9jb250YWluZXItODMyMWE3NjMgICAgICAgICAgICBSVU5OSU5HIDEgICAg ICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjEwNCwgMTcyLjI5LjIz Ni4xODYgICAgICAgICAgICAgICAgIC0NCj4+ID4+ID4gYWlvMV9oZWF0X2FwaXNfY29udGFpbmVy LTNmNzBhZDc0ICAgICAgICAgICBSVU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9w ZW5zdGFjayAxMC4yNTUuMjU1LjE2NiwgMTcyLjI5LjIzOS4xMyAgICAgICAgICAgICAgICAgIC0N Cj4+ID4+ID4gYWlvMV9oZWF0X2VuZ2luZV9jb250YWluZXItYTE4ZTVhMGEgICAgICAgICBSVU5O SU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjExOCwg MTcyLjI5LjIzOC43ICAgICAgICAgICAgICAgICAgIC0NCj4+ID4+ID4gYWlvMV9ob3Jpem9uX2Nv bnRhaW5lci1lNDkzMjc1YyAgICAgICAgICAgICBSVU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+ PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1Ljk4LCAxNzIuMjkuMjM3LjQzICAgICAgICAgICAg ICAgICAgIC0NCj4+ID4+ID4gYWlvMV9rZXlzdG9uZV9jb250YWluZXItYzBlMjNlMTQgICAgICAg ICAgICBSVU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUu MjU1LjYwLCAxNzIuMjkuMjM3LjE2NSAgICAgICAgICAgICAgICAgIC0NCj4+ID4+ID4gYWlvMV9t ZW1jYWNoZWRfY29udGFpbmVyLWVmOGZlZDRjICAgICAgICAgICBSVU5OSU5HIDEgICAgICAgICBv bmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjIxNCwgMTcyLjI5LjIzOC4yMTEg ICAgICAgICAgICAgICAgIC0NCj4+ID4+ID4gYWlvMV9uZXV0cm9uX2FnZW50c19jb250YWluZXIt MTMxZTk5NmUgICAgICBSVU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFj ayAxMC4yNTUuMjU1LjE1MywgMTcyLjI5LjIzNy4yNDYsIDE3Mi4yOS4yNDMuMjI3IC0NCj4+ID4+ ID4gYWlvMV9uZXV0cm9uX3NlcnZlcl9jb250YWluZXItY2NkNjkzOTQgICAgICBSVU5OSU5HIDEg ICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjI3LCAxNzIuMjku MjM2LjEyOSAgICAgICAgICAgICAgICAgIC0NCj4+ID4+ID4gYWlvMV9ub3ZhX2FwaV9jb250YWlu ZXItNzMyNzQwMjQgICAgICAgICAgICBSVU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+ IG9wZW5zdGFjayAxMC4yNTUuMjU1LjQyLCAxNzIuMjkuMjM4LjIwMSAgICAgICAgICAgICAgICAg IC0NCj4+ID4+ID4gYWlvMV9ub3ZhX2FwaV9tZXRhZGF0YV9jb250YWluZXItYTFkMzIyODIgICBS VU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjIx 
OCwgMTcyLjI5LjIzOC4xNTMgICAgICAgICAgICAgICAgIC0NCj4+ID4+ID4gYWlvMV9ub3ZhX2Fw aV9vc19jb21wdXRlX2NvbnRhaW5lci01MjcyNTk0MCBSVU5OSU5HIDEgICAgICAgICBvbmJvb3Qs DQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjEwOSwgMTcyLjI5LjIzNi4xMjYgICAgICAg ICAgICAgICAgIC0NCj4+ID4+ID4gYWlvMV9ub3ZhX2FwaV9wbGFjZW1lbnRfY29udGFpbmVyLTA1 OGU4MDMxICBSVU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4y NTUuMjU1LjI5LCAxNzIuMjkuMjM2LjE1NyAgICAgICAgICAgICAgICAgIC0NCj4+ID4+ID4gYWlv MV9ub3ZhX2NvbmR1Y3Rvcl9jb250YWluZXItOWI2YjIwOGMgICAgICBSVU5OSU5HIDEgICAgICAg ICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjE4LCAxNzIuMjkuMjM5Ljkg ICAgICAgICAgICAgICAgICAgIC0NCj4+ID4+ID4gYWlvMV9ub3ZhX2NvbnNvbGVfY29udGFpbmVy LTBmYjg5OTVjICAgICAgICBSVU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5z dGFjayAxMC4yNTUuMjU1LjQ3LCAxNzIuMjkuMjM3LjEyOSAgICAgICAgICAgICAgICAgIC0NCj4+ ID4+ID4gYWlvMV9ub3ZhX3NjaGVkdWxlcl9jb250YWluZXItOGY3YTY1N2EgICAgICBSVU5OSU5H IDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjE5NSwgMTcy LjI5LjIzOC4xMTMgICAgICAgICAgICAgICAgIC0NCj4+ID4+ID4gYWlvMV9yYWJiaXRfbXFfY29u dGFpbmVyLWMzNDUwZDY2ICAgICAgICAgICBSVU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+ PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjExMSwgMTcyLjI5LjIzNy4yMDIgICAgICAgICAgICAg ICAgIC0NCj4+ID4+ID4gYWlvMV9yZXBvX2NvbnRhaW5lci04ZTA3ZmRlZiAgICAgICAgICAgICAg ICBSVU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1 LjE0MSwgMTcyLjI5LjIzOS43OSAgICAgICAgICAgICAgICAgIC0NCj4+ID4+ID4gYWlvMV9yc3lz bG9nX2NvbnRhaW5lci1iMTk4ZmJlNSAgICAgICAgICAgICBSVU5OSU5HIDEgICAgICAgICBvbmJv b3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjEzLCAxNzIuMjkuMjM2LjE5NSAgICAg ICAgICAgICAgICAgIC0NCj4+ID4+ID4gYWlvMV9zd2lmdF9wcm94eV9jb250YWluZXItMWEzNTM2 ZTEgICAgICAgICBSVU5OSU5HIDEgICAgICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAx MC4yNTUuMjU1LjEwOCwgMTcyLjI5LjIzNy4zMSwgMTcyLjI5LjI0NC4yNDggIC0NCj4+ID4+ID4g YWlvMV91dGlsaXR5X2NvbnRhaW5lci1iZDEwNmYxMSAgICAgICAgICAgICBSVU5OSU5HIDEgICAg ICAgICBvbmJvb3QsDQo+PiA+PiA+IG9wZW5zdGFjayAxMC4yNTUuMjU1LjU0LCAxNzIuMjkuMjM5 LjEyNCAgICAgICAgICAgICAgICAgIC0NCj4+ID4+ID4gW3Jvb3RAYWlvIH5dIyBseGMtYQ0KPj4g Pj4gPiBseGMtYXR0YWNoICAgICBseGMtYXV0b3N0YXJ0DQo+PiA+PiA+IFtyb290QGFpbyB+XSMg bHhjLWF0dGFjaCAtbiBhaW8xX3V0aWxpdHlfY29udGFpbmVyLWJkMTA2ZjExDQo+PiA+PiA+IFty b290QGFpbzEtdXRpbGl0eS1jb250YWluZXItYmQxMDZmMTEgfl0jDQo+PiA+PiA+IFtyb290QGFp bzEtdXRpbGl0eS1jb250YWluZXItYmQxMDZmMTEgfl0jIHNvdXJjZSAvcm9vdC9vcGVucmMNCj4+ ID4+ID4gW3Jvb3RAYWlvMS11dGlsaXR5LWNvbnRhaW5lci1iZDEwNmYxMSB+XSMgb3BlbnN0YWNr DQo+PiA+PiA+IG9wZW5zdGFjayAgICAgICAgICAgICAgICAgICAgICAgICBvcGVuc3RhY2staG9z dC1ob3N0ZmlsZS1zZXR1cC5zaA0KPj4gPj4gPiBbcm9vdEBhaW8xLXV0aWxpdHktY29udGFpbmVy LWJkMTA2ZjExIH5dIyBvcGVuc3RhY2sNCj4+ID4+ID4gb3BlbnN0YWNrICAgICAgICAgICAgICAg ICAgICAgICAgIG9wZW5zdGFjay1ob3N0LWhvc3RmaWxlLXNldHVwLnNoDQo+PiA+PiA+IFtyb290 QGFpbzEtdXRpbGl0eS1jb250YWluZXItYmQxMDZmMTEgfl0jIG9wZW5zdGFjayB1c2VyIGxpc3QN Cj4+ID4+ID4gRmFpbGVkIHRvIGRpc2NvdmVyIGF2YWlsYWJsZSBpZGVudGl0eSB2ZXJzaW9ucyB3 aGVuIGNvbnRhY3RpbmcNCj4+ID4+ID4gaHR0cDovLzE3Mi4yOS4yMzYuMTAwOjUwMDAvdjMuIEF0 dGVtcHRpbmcgdG8gcGFyc2UgdmVyc2lvbiBmcm9tIFVSTC4NCj4+ID4+ID4gU2VydmljZSBVbmF2 YWlsYWJsZSAoSFRUUCA1MDMpDQo+PiA+PiA+IFtyb290QGFpbzEtdXRpbGl0eS1jb250YWluZXIt YmQxMDZmMTEgfl0jDQo+PiA+PiA+DQo+PiA+PiA+DQo+PiA+PiA+IG5vdCBzdXJlIHdoYXQgaXMg dGhpcyBlcnJvciA/DQo+PiA+PiA+DQo+PiA+PiA+DQo+PiA+PiA+IE9uIFNhdCwgRmViIDMsIDIw MTggYXQgNzoyOSBQTSwgU2F0aXNoIFBhdGVsIDxzYXRpc2gudHh0QGdtYWlsLmNvbT4NCj4+ID4+ ID4gd3JvdGU6DQo+PiA+PiA+PiBJIGhhdmUgdGlyZWQgZXZlcnl0aGluZyBidXQgZGlkbid0IGFi bGUgdG8gZmluZCBzb2x1dGlvbiA6KCAgd2hhdCBpIGFtDQo+PiA+PiA+PiBkb2luZyB3cm9uZyBo 
ZXJlLCBpIGFtIGZvbGxvd2luZyB0aGlzIGluc3RydWN0aW9uIGFuZCBwbGVhc2UgbGV0IG1lDQo+ PiA+PiA+PiBrbm93IGlmIGkgYW0gd3JvbmcNCj4+ID4+ID4+DQo+PiA+PiA+Pg0KPj4gPj4gPj4g aHR0cHM6Ly9kZXZlbG9wZXIucmFja3NwYWNlLmNvbS9ibG9nL2xpZmUtd2l0aG91dC1kZXZzdGFj ay1vcGVuc3RhY2stZGV2ZWxvcG1lbnQtd2l0aC1vc2EvDQo+PiA+PiA+Pg0KPj4gPj4gPj4gSSBo YXZlIENlbnRPUzcsIHdpdGggOCBDUFUgYW5kIDE2R0IgbWVtb3J5IHdpdGggMTAwR0IgZGlzayBz aXplLg0KPj4gPj4gPj4NCj4+ID4+ID4+IEVycm9yOiBodHRwOi8vcGFzdGUub3BlbnN0YWNrLm9y Zy9zaG93LzY2MDQ5Ny8NCj4+ID4+ID4+DQo+PiA+PiA+Pg0KPj4gPj4gPj4gSSBoYXZlIHRpcmVk IGdhdGUtY2hlY2stY29tbWl0LnNoIGJ1dCBzYW1lIGVycm9yIDooDQo+PiA+PiA+Pg0KPj4gPj4g Pj4NCj4+ID4+ID4+DQo+PiA+PiA+PiBPbiBTYXQsIEZlYiAzLCAyMDE4IGF0IDE6MTEgQU0sIFNh dGlzaCBQYXRlbCA8c2F0aXNoLnR4dEBnbWFpbC5jb20+DQo+PiA+PiA+PiB3cm90ZToNCj4+ID4+ ID4+PiBJIGhhdmUgc3RhcnRlZCBwbGF5aW5nIHdpdGggb3BlbnN0YWNrLWFuc2libGUgb24gQ2Vu dE9TNyBhbmQgdHJ5aW5nIHRvDQo+PiA+PiA+Pj4gaW5zdGFsbCBBbGwtaW4tb25lIGJ1dCBnb3Qg dGhpcyBlcnJvciBhbmQgbm90IHN1cmUgd2hhdCBjYXVzZSB0aGF0DQo+PiA+PiA+Pj4gZXJyb3Ig aG93IGRvIGkgdHJvdWJsZXNob290IGl0Pw0KPj4gPj4gPj4+DQo+PiA+PiA+Pj4NCj4+ID4+ID4+ PiBUQVNLIFtib290c3RyYXAtaG9zdCA6IFJlbW92ZSBhbiBleGlzdGluZyBwcml2YXRlL3B1Ymxp YyBzc2gga2V5cyBpZg0KPj4gPj4gPj4+IG9uZSBpcyBtaXNzaW5nXQ0KPj4gPj4gPj4+DQo+PiA+ PiA+Pj4gKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqDQo+PiA+PiA+Pj4gc2tpcHBpbmc6IFtsb2NhbGhvc3RdID0+ IChpdGVtPWlkX3JzYSkNCj4+ID4+ID4+PiBza2lwcGluZzogW2xvY2FsaG9zdF0gPT4gKGl0ZW09 aWRfcnNhLnB1YikNCj4+ID4+ID4+Pg0KPj4gPj4gPj4+IFRBU0sgW2Jvb3RzdHJhcC1ob3N0IDog Q3JlYXRlIHNzaCBrZXkgcGFpciBmb3Igcm9vdF0NCj4+ID4+ID4+Pg0KPj4gPj4gPj4+ICoqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQo+PiA+PiA+Pj4gb2s6 IFtsb2NhbGhvc3RdDQo+PiA+PiA+Pj4NCj4+ID4+ID4+PiBUQVNLIFtib290c3RyYXAtaG9zdCA6 IEZldGNoIHRoZSBnZW5lcmF0ZWQgcHVibGljIHNzaCBrZXldDQo+PiA+PiA+Pj4NCj4+ID4+ID4+ PiAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KPj4gPj4gPj4+IGNo YW5nZWQ6IFtsb2NhbGhvc3RdDQo+PiA+PiA+Pj4NCj4+ID4+ID4+PiBUQVNLIFtib290c3RyYXAt aG9zdCA6IEVuc3VyZSByb290J3MgbmV3IHB1YmxpYyBzc2gga2V5IGlzIGluDQo+PiA+PiA+Pj4g YXV0aG9yaXplZF9rZXlzXQ0KPj4gPj4gPj4+DQo+PiA+PiA+Pj4gKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqDQo+PiA+PiA+Pj4gb2s6IFtsb2NhbGhvc3RdDQo+PiA+PiA+Pj4NCj4+ID4+ID4+PiBUQVNL IFtib290c3RyYXAtaG9zdCA6IENyZWF0ZSB0aGUgcmVxdWlyZWQgZGVwbG95bWVudCBkaXJlY3Rv cmllc10NCj4+ID4+ID4+Pg0KPj4gPj4gPj4+ICoqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKg0KPj4gPj4gPj4+IGNoYW5nZWQ6IFtsb2NhbGhvc3RdID0+IChpdGVtPS9ldGMvb3BlbnN0 YWNrX2RlcGxveSkNCj4+ID4+ID4+PiBjaGFuZ2VkOiBbbG9jYWxob3N0XSA9PiAoaXRlbT0vZXRj L29wZW5zdGFja19kZXBsb3kvY29uZi5kKQ0KPj4gPj4gPj4+IGNoYW5nZWQ6IFtsb2NhbGhvc3Rd ID0+IChpdGVtPS9ldGMvb3BlbnN0YWNrX2RlcGxveS9lbnYuZCkNCj4+ID4+ID4+Pg0KPj4gPj4g Pj4+IFRBU0sgW2Jvb3RzdHJhcC1ob3N0IDogRGVwbG95IHVzZXIgY29uZi5kIGNvbmZpZ3VyYXRp b25dDQo+PiA+PiA+Pj4NCj4+ID4+ID4+PiAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqDQo+PiA+PiA+Pj4gZmF0YWw6IFtsb2NhbGhvc3RdOiBGQUlMRUQhID0+IHsi bXNnIjogInt7DQo+PiA+PiA+Pj4gY29uZmRfb3ZlcnJpZGVzW2Jvb3RzdHJhcF9ob3N0X3NjZW5h cmlvXSB9fTogJ2RpY3Qgb2JqZWN0JyBoYXMgbm8NCj4+ID4+ID4+PiBhdHRyaWJ1dGUgdSdhaW8n In0NCj4+ID4+ID4+Pg0KPj4gPj4gPj4+IFJVTk5JTkcgSEFORExFUiBbc3NoZCA6IFJlbG9hZCB0 
aGUgU1NIIHNlcnZpY2VdDQo+PiA+PiA+Pj4NCj4+ID4+ID4+PiAqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQo+PiA+PiA+Pj4gICAgICAgICB0byBy ZXRyeSwgdXNlOiAtLWxpbWl0DQo+PiA+PiA+Pj4gQC9vcHQvb3BlbnN0YWNrLWFuc2libGUvdGVz dHMvYm9vdHN0cmFwLWFpby5yZXRyeQ0KPj4gPj4gPj4+DQo+PiA+PiA+Pj4gUExBWSBSRUNBUA0K Pj4gPj4gPj4+ICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQo+PiA+PiA+Pj4gbG9j YWxob3N0ICAgICAgICAgICAgICAgICAgOiBvaz02MSAgIGNoYW5nZWQ9MzYgICB1bnJlYWNoYWJs ZT0wDQo+PiA+PiA+Pj4gZmFpbGVkPTINCj4+ID4+ID4+Pg0KPj4gPj4gPj4+IFtyb290QGFpbyBv cGVuc3RhY2stYW5zaWJsZV0jDQo+PiA+Pg0KPj4gPj4gX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX18NCj4+ID4+IE1haWxpbmcgbGlzdDoNCj4+ID4+IGh0dHA6 Ly9saXN0cy5vcGVuc3RhY2sub3JnL2NnaS1iaW4vbWFpbG1hbi9saXN0aW5mby9vcGVuc3RhY2sN Cj4+ID4+IFBvc3QgdG8gICAgIDogb3BlbnN0YWNrQGxpc3RzLm9wZW5zdGFjay5vcmcNCj4+ID4+ IFVuc3Vic2NyaWJlIDoNCj4+ID4+IGh0dHA6Ly9saXN0cy5vcGVuc3RhY2sub3JnL2NnaS1iaW4v bWFpbG1hbi9saXN0aW5mby9vcGVuc3RhY2sNCj4+ID4NCj4+ID4NCj4gDQo+IF9fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fDQo+IE1haWxpbmcgbGlzdDogaHR0 cDovL2xpc3RzLm9wZW5zdGFjay5vcmcvY2dpLWJpbi9tYWlsbWFuL2xpc3RpbmZvL29wZW5zdGFj aw0KPiBQb3N0IHRvICAgICA6IG9wZW5zdGFja0BsaXN0cy5vcGVuc3RhY2sub3JnDQo+IFVuc3Vi c2NyaWJlIDogaHR0cDovL2xpc3RzLm9wZW5zdGFjay5vcmcvY2dpLWJpbi9tYWlsbWFuL2xpc3Rp bmZvL29wZW5zdGFjaw0KDQo= --=_68d84b4844d99d5e832c62a4fff924b4 Content-Transfer-Encoding: base64 Content-Type: text/html; charset=utf-8 PGh0bWw+PGhlYWQ+PG1ldGEgaHR0cC1lcXVpdj0iY29udGVudC10eXBlIiBjb250ZW50PSJ0ZXh0 L2h0bWw7IGNoYXJzZXQ9dXRmLTgiPjwvaGVhZD48Ym9keSBkaXI9ImF1dG8iPjxkaXY+PC9kaXY+ PGRpdj5JIHJlY2FsbCB0aGF0IGFuc2libGUgcHJvamVjdCB3b3JrZWQgYmV0dGVyIHdpdGggdWJ1 bnR1IGFuZCBub3Qgd2l0aCBDZW50T1MuJm5ic3A7PC9kaXY+PGRpdj48YnI+PC9kaXY+PGRpdj48 YnI+SWwgZ2lvcm5vIDA0IGZlYiAyMDE4LCBhbGxlIG9yZSAxMDoyMSwgTWFyY2luIER1bGFrICZs dDs8YSBocmVmPSJtYWlsdG86bWFyY2luLmR1bGFrQGdtYWlsLmNvbSI+bWFyY2luLmR1bGFrQGdt YWlsLmNvbTwvYT4mZ3Q7IGhhIHNjcml0dG86PGJyPjxicj48L2Rpdj48YmxvY2txdW90ZSB0eXBl PSJjaXRlIj48ZGl2PjxkaXYgZGlyPSJsdHIiPlNpbmNlIHlvdSBhbHJlYWR5IGludmVzdGVkIHNv bWUgdGltZSwgcmVwZWF0IHlvdXIgaW5zdGFsbGF0aW9uIGZyb20gc2NyYXRjaCBhbmQgc3VibWl0 IHRoZSBidWcgaWYgbmVjZXNzYXJ5LCBtYXliZSBzb21lb25lIHdpbGwgbG9vayBhdCBpdC48ZGl2 PjxkaXYgc3R5bGU9ImNvbG9yOnJnYigzNCwzNCwzNCk7Zm9udC1mYW1pbHk6YXJpYWwsc2Fucy1z ZXJpZjtmb250LXNpemU6c21hbGw7Zm9udC1zdHlsZTpub3JtYWw7Zm9udC12YXJpYW50LWxpZ2F0 dXJlczpub3JtYWw7Zm9udC12YXJpYW50LWNhcHM6bm9ybWFsO2ZvbnQtd2VpZ2h0OjQwMDtsZXR0 ZXItc3BhY2luZzpub3JtYWw7dGV4dC1hbGlnbjpzdGFydDt0ZXh0LWluZGVudDowcHg7dGV4dC10 cmFuc2Zvcm06bm9uZTt3aGl0ZS1zcGFjZTpub3JtYWw7d29yZC1zcGFjaW5nOjBweDtiYWNrZ3Jv dW5kLWNvbG9yOnJnYigyNTUsMjU1LDI1NSk7dGV4dC1kZWNvcmF0aW9uLXN0eWxlOmluaXRpYWw7 dGV4dC1kZWNvcmF0aW9uLWNvbG9yOmluaXRpYWwiPkFmdGVyIHRoYXQgY2hlY2sgdGhlIG90aGVy IGRlcGxveW1lbnQgdG9vbHMsIGxpa2UgdGhlIGp1c3QgbWVudGlvbmVkIFRyaXBsZU8uPC9kaXY+ PGRpdj48ZGl2PkknbSBub3Qgc3VyZSB3aGV0aGVyIG9wZW5zdGFjay1hbnNpYmxlIChvciBhbnkg b3RoZXIgdG9vbCkgaXMgcHJvZHVjdGlvbiByZWFkeSAtIGp1c3QgbG9vayBhdCB0aGUgdHlwZXMg b2YgYnVncyB0aG9zZSBwcm9qZWN0cyBhcmUgY3VycmVudGx5IGRlYWxpbmcgd2l0aCw8L2Rpdj48 ZGl2PmJ1dCB5b3UgbWF5IGJlIG1vcmUgbHVja3kgd2l0aCBhbiBVYnVudHUgZGVwbG95bWVudC48 L2Rpdj48ZGl2Pjxicj48L2Rpdj48ZGl2PjxkaXY+TWFyY2luPC9kaXY+PC9kaXY+PC9kaXY+PC9k aXY+PC9kaXY+PGRpdiBjbGFzcz0iZ21haWxfZXh0cmEiPjxicj48ZGl2IGNsYXNzPSJnbWFpbF9x 
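As a rough sketch of that container check (the container name is the one quoted in this thread; the mariadb unit name, the playbook path and the galera_all group are assumptions about a typical openstack-ansible AIO, not something verified against this deployment):

# on the AIO host, attach to the galera container named in the recap above
lxc-attach -n aio1_galera_container-4f488f6a

# inside the container, check whether the database server is running at all
systemctl status mariadb

# if it is up, ask Galera whether this node reached the Synced state
mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"

# back on the host, re-run only the galera part of the deployment
cd /opt/openstack-ansible/playbooks
openstack-ansible galera-install.yml --limit galera_all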
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From satish.txt at gmail.com  Sun Feb 4 19:00:50 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Sun, 4 Feb 2018 14:00:50 -0500
Subject: [Openstack] openstack-ansible aio error
In-Reply-To: <2D5C1208-BA0A-491D-B232-6B0AB3EBC8CF@italy1.com>
References: <2D5C1208-BA0A-491D-B232-6B0AB3EBC8CF@italy1.com>
Message-ID: 

Just wondering why you said we can't do HA with TripleO? I thought
it does support HA. Am I missing something here?

On Sun, Feb 4, 2018 at 11:21 AM,  wrote:
> What are you looking for ha? Etc. Tripleo is the way to go for that packstack if you want simple deployment but no ha of course.
>
>> On 04 Feb 2018, at 07:53, Satish Patel wrote:
>>
>> Hi Marcin,
>>
>> Thank you, i will try other link, also i am using CentOS7 but anyway
>> now question is does openstack-ansible ready for production deployment
>> despite galera issues and bug?
>>
>> If i want to go on production should i wait or find other tools to
>> deploy on production?
>>
>>> On Sun, Feb 4, 2018 at 5:29 AM, Marcin Dulak wrote:
>>> When playing with openstack-ansible do it in a virtual setup (e.g. nested
>>> virtualization with libvirt) so you can reproducibly bring up your
>>> environment from scratch.
>>> You will have to do it multiple times.
>>>
>>> https://developer.rackspace.com/blog/life-without-devstack-openstack-development-with-osa/
>>> is more than 2 years old.
>>>
>>> Try to follow
>>> https://docs.openstack.org/openstack-ansible/latest/contributor/quickstart-aio.html
>>> but git clone the latest state of the openstack-ansible repo.
>>> The above page has a link that can be used to submit bugs directly to the
>>> openstack-ansible project at launchpad.
>>> In this way you may be able to cleanup/improve the documentation,
>>> and since your setup is the simplest possible one your bug reports may get
>>> noticed and reproduced by the developers.
>>> What happens is that most people try openstack-ansible, don't report bugs,
>>> or report the bugs without the information neccesary
>>> to reproduce them, and abandon the whole idea.
>>>
>>> Try to search
>>> https://bugs.launchpad.net/openstack-ansible/+bugs?field.searchtext=galera
>>> for inspiration about what to do.
>>> Currently the galera setup in openstack-ansible, especially on centos7 seems
>>> to be undergoing some critical changes.
>>> Enter the galera container:
>>> lxc-attach -n aio1_galera_container-4f488f6a
>>> look around it, check whether mysqld is running etc., try to identify which
>>> ansible tasks failed and run them manually inside of the container.
>>>
>>> Marcin
>>>
>>>
>>>> On Sun, Feb 4, 2018 at 3:41 AM, Satish Patel wrote:
>>>>
>>>> I have noticed in output "aio1_galera_container" is failed, how do i
>>>> fixed this kind of issue?
>>>> >>>> >>>> >>>> PLAY RECAP >>>> ************************************************************************************************************************************************************************** >>>> aio1 : ok=41 changed=4 unreachable=0 >>>> failed=0 >>>> aio1_cinder_api_container-2af4dd01 : ok=0 changed=0 >>>> unreachable=0 failed=0 >>>> aio1_cinder_scheduler_container-454db1fb : ok=0 changed=0 >>>> unreachable=0 failed=0 >>>> aio1_designate_container-f7ea3f73 : ok=0 changed=0 unreachable=0 >>>> failed=0 >>>> aio1_galera_container-4f488f6a : ok=32 changed=3 unreachable=0 >>>> failed=1 >>>> >>>>> On Sat, Feb 3, 2018 at 9:26 PM, Satish Patel wrote: >>>>> I have re-install centos7 and give it a try and got this error >>>>> >>>>> DEBUG MESSAGE RECAP >>>>> ************************************************************ >>>>> DEBUG: [Load local packages] >>>>> *************************************************** >>>>> All items completed >>>>> >>>>> Saturday 03 February 2018 21:04:07 -0500 (0:00:04.175) >>>>> 0:16:17.204 ***** >>>>> >>>>> =============================================================================== >>>>> repo_build : Create OpenStack-Ansible requirement wheels -------------- >>>>> 268.16s >>>>> repo_build : Wait for the venvs builds to complete -------------------- >>>>> 110.30s >>>>> repo_build : Install packages ------------------------------------------ >>>>> 68.26s >>>>> repo_build : Clone git repositories asynchronously --------------------- >>>>> 59.85s >>>>> pip_install : Install distro packages ---------------------------------- >>>>> 36.72s >>>>> galera_client : Install galera distro packages ------------------------- >>>>> 33.21s >>>>> haproxy_server : Create haproxy service config files ------------------- >>>>> 30.81s >>>>> repo_build : Execute the venv build scripts asynchonously -------------- >>>>> 29.69s >>>>> pip_install : Install distro packages ---------------------------------- >>>>> 23.56s >>>>> repo_server : Install repo server packages ----------------------------- >>>>> 20.11s >>>>> memcached_server : Install distro packages ----------------------------- >>>>> 16.35s >>>>> repo_build : Create venv build options files --------------------------- >>>>> 14.57s >>>>> haproxy_server : Install HAProxy Packages >>>>> ------------------------------- 8.35s >>>>> rsyslog_client : Install rsyslog packages >>>>> ------------------------------- 8.33s >>>>> rsyslog_client : Install rsyslog packages >>>>> ------------------------------- 7.64s >>>>> rsyslog_client : Install rsyslog packages >>>>> ------------------------------- 7.42s >>>>> repo_build : Wait for git clones to complete >>>>> ---------------------------- 7.25s >>>>> repo_server : Install repo caching server packages >>>>> ---------------------- 4.76s >>>>> galera_server : Check that WSREP is ready >>>>> ------------------------------- 4.18s >>>>> repo_server : Git service data folder setup >>>>> ----------------------------- 4.04s >>>>> ++ exit_fail 341 0 >>>>> ++ set +x >>>>> ++ info_block 'Error Info - 341' 0 >>>>> ++ echo >>>>> ---------------------------------------------------------------------- >>>>> ---------------------------------------------------------------------- >>>>> ++ print_info 'Error Info - 341' 0 >>>>> ++ PROC_NAME='- [ Error Info - 341 0 ] -' >>>>> ++ printf '\n%s%s\n' '- [ Error Info - 341 0 ] -' >>>>> -------------------------------------------- >>>>> >>>>> - [ Error Info - 341 0 ] --------------------------------------------- >>>>> ++ echo >>>>> 
---------------------------------------------------------------------- >>>>> ---------------------------------------------------------------------- >>>>> ++ exit_state 1 >>>>> ++ set +x >>>>> ---------------------------------------------------------------------- >>>>> >>>>> - [ Run Time = 2030 seconds || 33 minutes ] -------------------------- >>>>> ---------------------------------------------------------------------- >>>>> ---------------------------------------------------------------------- >>>>> >>>>> - [ Status: Failure ] ------------------------------------------------ >>>>> ---------------------------------------------------------------------- >>>>> >>>>> >>>>> >>>>> >>>>> I don't know why it failed >>>>> >>>>> but i tried following: >>>>> >>>>> [root at aio ~]# lxc-ls -f >>>>> NAME STATE AUTOSTART GROUPS >>>>> IPV4 IPV6 >>>>> aio1_cinder_api_container-2af4dd01 RUNNING 1 onboot, >>>>> openstack 10.255.255.62, 172.29.238.210, 172.29.244.152 - >>>>> aio1_cinder_scheduler_container-454db1fb RUNNING 1 onboot, >>>>> openstack 10.255.255.117, 172.29.239.172 - >>>>> aio1_designate_container-f7ea3f73 RUNNING 1 onboot, >>>>> openstack 10.255.255.235, 172.29.239.166 - >>>>> aio1_galera_container-4f488f6a RUNNING 1 onboot, >>>>> openstack 10.255.255.193, 172.29.236.69 - >>>>> aio1_glance_container-f8caa9e6 RUNNING 1 onboot, >>>>> openstack 10.255.255.225, 172.29.239.52, 172.29.246.25 - >>>>> aio1_heat_api_container-8321a763 RUNNING 1 onboot, >>>>> openstack 10.255.255.104, 172.29.236.186 - >>>>> aio1_heat_apis_container-3f70ad74 RUNNING 1 onboot, >>>>> openstack 10.255.255.166, 172.29.239.13 - >>>>> aio1_heat_engine_container-a18e5a0a RUNNING 1 onboot, >>>>> openstack 10.255.255.118, 172.29.238.7 - >>>>> aio1_horizon_container-e493275c RUNNING 1 onboot, >>>>> openstack 10.255.255.98, 172.29.237.43 - >>>>> aio1_keystone_container-c0e23e14 RUNNING 1 onboot, >>>>> openstack 10.255.255.60, 172.29.237.165 - >>>>> aio1_memcached_container-ef8fed4c RUNNING 1 onboot, >>>>> openstack 10.255.255.214, 172.29.238.211 - >>>>> aio1_neutron_agents_container-131e996e RUNNING 1 onboot, >>>>> openstack 10.255.255.153, 172.29.237.246, 172.29.243.227 - >>>>> aio1_neutron_server_container-ccd69394 RUNNING 1 onboot, >>>>> openstack 10.255.255.27, 172.29.236.129 - >>>>> aio1_nova_api_container-73274024 RUNNING 1 onboot, >>>>> openstack 10.255.255.42, 172.29.238.201 - >>>>> aio1_nova_api_metadata_container-a1d32282 RUNNING 1 onboot, >>>>> openstack 10.255.255.218, 172.29.238.153 - >>>>> aio1_nova_api_os_compute_container-52725940 RUNNING 1 onboot, >>>>> openstack 10.255.255.109, 172.29.236.126 - >>>>> aio1_nova_api_placement_container-058e8031 RUNNING 1 onboot, >>>>> openstack 10.255.255.29, 172.29.236.157 - >>>>> aio1_nova_conductor_container-9b6b208c RUNNING 1 onboot, >>>>> openstack 10.255.255.18, 172.29.239.9 - >>>>> aio1_nova_console_container-0fb8995c RUNNING 1 onboot, >>>>> openstack 10.255.255.47, 172.29.237.129 - >>>>> aio1_nova_scheduler_container-8f7a657a RUNNING 1 onboot, >>>>> openstack 10.255.255.195, 172.29.238.113 - >>>>> aio1_rabbit_mq_container-c3450d66 RUNNING 1 onboot, >>>>> openstack 10.255.255.111, 172.29.237.202 - >>>>> aio1_repo_container-8e07fdef RUNNING 1 onboot, >>>>> openstack 10.255.255.141, 172.29.239.79 - >>>>> aio1_rsyslog_container-b198fbe5 RUNNING 1 onboot, >>>>> openstack 10.255.255.13, 172.29.236.195 - >>>>> aio1_swift_proxy_container-1a3536e1 RUNNING 1 onboot, >>>>> openstack 10.255.255.108, 172.29.237.31, 172.29.244.248 - >>>>> aio1_utility_container-bd106f11 RUNNING 1 onboot, 
>>>>> openstack 10.255.255.54, 172.29.239.124 - >>>>> [root at aio ~]# lxc-a >>>>> lxc-attach lxc-autostart >>>>> [root at aio ~]# lxc-attach -n aio1_utility_container-bd106f11 >>>>> [root at aio1-utility-container-bd106f11 ~]# >>>>> [root at aio1-utility-container-bd106f11 ~]# source /root/openrc >>>>> [root at aio1-utility-container-bd106f11 ~]# openstack >>>>> openstack openstack-host-hostfile-setup.sh >>>>> [root at aio1-utility-container-bd106f11 ~]# openstack >>>>> openstack openstack-host-hostfile-setup.sh >>>>> [root at aio1-utility-container-bd106f11 ~]# openstack user list >>>>> Failed to discover available identity versions when contacting >>>>> http://172.29.236.100:5000/v3. Attempting to parse version from URL. >>>>> Service Unavailable (HTTP 503) >>>>> [root at aio1-utility-container-bd106f11 ~]# >>>>> >>>>> >>>>> not sure what is this error ? >>>>> >>>>> >>>>> On Sat, Feb 3, 2018 at 7:29 PM, Satish Patel >>>>> wrote: >>>>>> I have tired everything but didn't able to find solution :( what i am >>>>>> doing wrong here, i am following this instruction and please let me >>>>>> know if i am wrong >>>>>> >>>>>> >>>>>> https://developer.rackspace.com/blog/life-without-devstack-openstack-development-with-osa/ >>>>>> >>>>>> I have CentOS7, with 8 CPU and 16GB memory with 100GB disk size. >>>>>> >>>>>> Error: http://paste.openstack.org/show/660497/ >>>>>> >>>>>> >>>>>> I have tired gate-check-commit.sh but same error :( >>>>>> >>>>>> >>>>>> >>>>>> On Sat, Feb 3, 2018 at 1:11 AM, Satish Patel >>>>>> wrote: >>>>>>> I have started playing with openstack-ansible on CentOS7 and trying to >>>>>>> install All-in-one but got this error and not sure what cause that >>>>>>> error how do i troubleshoot it? >>>>>>> >>>>>>> >>>>>>> TASK [bootstrap-host : Remove an existing private/public ssh keys if >>>>>>> one is missing] >>>>>>> >>>>>>> ************************************************************************ >>>>>>> skipping: [localhost] => (item=id_rsa) >>>>>>> skipping: [localhost] => (item=id_rsa.pub) >>>>>>> >>>>>>> TASK [bootstrap-host : Create ssh key pair for root] >>>>>>> >>>>>>> ******************************************************************************************************** >>>>>>> ok: [localhost] >>>>>>> >>>>>>> TASK [bootstrap-host : Fetch the generated public ssh key] >>>>>>> >>>>>>> ************************************************************************************************** >>>>>>> changed: [localhost] >>>>>>> >>>>>>> TASK [bootstrap-host : Ensure root's new public ssh key is in >>>>>>> authorized_keys] >>>>>>> >>>>>>> ****************************************************************************** >>>>>>> ok: [localhost] >>>>>>> >>>>>>> TASK [bootstrap-host : Create the required deployment directories] >>>>>>> >>>>>>> ****************************************************************************************** >>>>>>> changed: [localhost] => (item=/etc/openstack_deploy) >>>>>>> changed: [localhost] => (item=/etc/openstack_deploy/conf.d) >>>>>>> changed: [localhost] => (item=/etc/openstack_deploy/env.d) >>>>>>> >>>>>>> TASK [bootstrap-host : Deploy user conf.d configuration] >>>>>>> >>>>>>> **************************************************************************************************** >>>>>>> fatal: [localhost]: FAILED! 
=> {"msg": "{{
>>>>>>>> confd_overrides[bootstrap_host_scenario] }}: 'dict object' has no
>>>>>>>> attribute u'aio'"}
>>>>>>>>
>>>>>>>> RUNNING HANDLER [sshd : Reload the SSH service]
>>>>>>>>
>>>>>>>> *************************************************************************************************************
>>>>>>>>        to retry, use: --limit
>>>>>>>> @/opt/openstack-ansible/tests/bootstrap-aio.retry
>>>>>>>>
>>>>>>>> PLAY RECAP
>>>>>>>> **************************************************************************************************************************************************
>>>>>>>> localhost                  : ok=61   changed=36   unreachable=0
>>>>>>>> failed=2
>>>>>>>>
>>>>>>>> [root at aio openstack-ansible]#
>>>>
>>>> _______________________________________________
>>>> Mailing list:
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>> Post to     : openstack at lists.openstack.org
>>>> Unsubscribe :
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>>
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

From remo at italy1.com  Sun Feb  4 19:05:08 2018
From: remo at italy1.com (remo at italy1.com)
Date: Sun, 4 Feb 2018 11:05:08 -0800
Subject: [Openstack] openstack-ansible aio error
In-Reply-To: 
References: <2D5C1208-BA0A-491D-B232-6B0AB3EBC8CF@italy1.com>
Message-ID: <1C807435-61CA-4A09-AD82-C6F3B91CEC03@italy1.com>

Tripleo = ha
Packstack = no ha

> Il giorno 04 feb 2018, alle ore 11:00, Satish Patel  ha scritto:
>
> Just wondering why did you say we can't do HA with TripleO?  I thought
> it does support HA. am i missing something here?
>
>> On Sun, Feb 4, 2018 at 11:21 AM,   wrote:
>> What are you looking for ha? Etc. Tripleo is the way to go for that packstack if you want simple deployment but no ha of course.

From satish.txt at gmail.com  Sun Feb  4 19:14:28 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Sun, 4 Feb 2018 14:14:28 -0500
Subject: [Openstack] openstack-ansible aio error
In-Reply-To: <1C807435-61CA-4A09-AD82-C6F3B91CEC03@italy1.com>
References: <2D5C1208-BA0A-491D-B232-6B0AB3EBC8CF@italy1.com>
	<1C807435-61CA-4A09-AD82-C6F3B91CEC03@italy1.com>
Message-ID: 

:) I am going to try openstack-ansible and and if i am lucky i will continue and
plan to deploy on production but if it will take too much my time to debug then i would go with tripleO which seems less complicated so far. As you said openstack-ansible has good ubuntu community and we are 100% CentOS shop and i want something which we are comfortable and supported by deployment tool. My first production cluster is 20 node but it may slowly grow if all goes well. On Sun, Feb 4, 2018 at 2:05 PM, wrote: > Tripleo = ha > Packstack = no ha > >> Il giorno 04 feb 2018, alle ore 11:00, Satish Patel ha scritto: >> >> Just wondering why did you say we can't do HA with TripleO? I thought >> it does support HA. am i missing something here? >> >>> On Sun, Feb 4, 2018 at 11:21 AM, wrote: >>> What are you looking for ha? Etc. Tripleo is the way to go for that packstack if you want simple deployment but no ha of course. >>> >>>> Il giorno 04 feb 2018, alle ore 07:53, Satish Patel ha scritto: >>>> >>>> Hi Marcin, >>>> >>>> Thank you, i will try other link, also i am using CentOS7 but anyway >>>> now question is does openstack-ansible ready for production deployment >>>> despite galera issues and bug? >>>> >>>> If i want to go on production should i wait or find other tools to >>>> deploy on production? >>>> >>>>> On Sun, Feb 4, 2018 at 5:29 AM, Marcin Dulak wrote: >>>>> When playing with openstack-ansible do it in a virtual setup (e.g. nested >>>>> virtualization with libvirt) so you can reproducibly bring up your >>>>> environment from scratch. >>>>> You will have to do it multiple times. >>>>> >>>>> https://developer.rackspace.com/blog/life-without-devstack-openstack-development-with-osa/ >>>>> is more than 2 years old. >>>>> >>>>> Try to follow >>>>> https://docs.openstack.org/openstack-ansible/latest/contributor/quickstart-aio.html >>>>> but git clone the latest state of the openstack-ansible repo. >>>>> The above page has a link that can be used to submit bugs directly to the >>>>> openstack-ansible project at launchpad. >>>>> In this way you may be able to cleanup/improve the documentation, >>>>> and since your setup is the simplest possible one your bug reports may get >>>>> noticed and reproduced by the developers. >>>>> What happens is that most people try openstack-ansible, don't report bugs, >>>>> or report the bugs without the information neccesary >>>>> to reproduce them, and abandon the whole idea. >>>>> >>>>> Try to search >>>>> https://bugs.launchpad.net/openstack-ansible/+bugs?field.searchtext=galera >>>>> for inspiration about what to do. >>>>> Currently the galera setup in openstack-ansible, especially on centos7 seems >>>>> to be undergoing some critical changes. >>>>> Enter the galera container: >>>>> lxc-attach -n aio1_galera_container-4f488f6a >>>>> look around it, check whether mysqld is running etc., try to identify which >>>>> ansible tasks failed and run them manually inside of the container. >>>>> >>>>> Marcin >>>>> >>>>> >>>>>> On Sun, Feb 4, 2018 at 3:41 AM, Satish Patel wrote: >>>>>> >>>>>> I have noticed in output "aio1_galera_container" is failed, how do i >>>>>> fixed this kind of issue? 
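
For reference, the quickstart linked in the quoted thread boils down to roughly the sequence below; stable/pike is only an example of a branch to pin, and the earlier "'dict object' has no attribute u'aio'" bootstrap failure is the sort of error a mismatch between the cloned branch and the instructions being followed can produce:

    git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
    cd /opt/openstack-ansible
    git checkout stable/pike        # example: pick the branch or tag the docs you follow were written for
    scripts/bootstrap-ansible.sh
    scripts/bootstrap-aio.sh
    cd playbooks
    openstack-ansible setup-hosts.yml setup-infrastructure.yml setup-openstack.yml
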
>>>>>> >>>>>> >>>>>> >>>>>> PLAY RECAP >>>>>> ************************************************************************************************************************************************************************** >>>>>> aio1 : ok=41 changed=4 unreachable=0 >>>>>> failed=0 >>>>>> aio1_cinder_api_container-2af4dd01 : ok=0 changed=0 >>>>>> unreachable=0 failed=0 >>>>>> aio1_cinder_scheduler_container-454db1fb : ok=0 changed=0 >>>>>> unreachable=0 failed=0 >>>>>> aio1_designate_container-f7ea3f73 : ok=0 changed=0 unreachable=0 >>>>>> failed=0 >>>>>> aio1_galera_container-4f488f6a : ok=32 changed=3 unreachable=0 >>>>>> failed=1 >>>>>> >>>>>>> On Sat, Feb 3, 2018 at 9:26 PM, Satish Patel wrote: >>>>>>> I have re-install centos7 and give it a try and got this error >>>>>>> >>>>>>> DEBUG MESSAGE RECAP >>>>>>> ************************************************************ >>>>>>> DEBUG: [Load local packages] >>>>>>> *************************************************** >>>>>>> All items completed >>>>>>> >>>>>>> Saturday 03 February 2018 21:04:07 -0500 (0:00:04.175) >>>>>>> 0:16:17.204 ***** >>>>>>> >>>>>>> =============================================================================== >>>>>>> repo_build : Create OpenStack-Ansible requirement wheels -------------- >>>>>>> 268.16s >>>>>>> repo_build : Wait for the venvs builds to complete -------------------- >>>>>>> 110.30s >>>>>>> repo_build : Install packages ------------------------------------------ >>>>>>> 68.26s >>>>>>> repo_build : Clone git repositories asynchronously --------------------- >>>>>>> 59.85s >>>>>>> pip_install : Install distro packages ---------------------------------- >>>>>>> 36.72s >>>>>>> galera_client : Install galera distro packages ------------------------- >>>>>>> 33.21s >>>>>>> haproxy_server : Create haproxy service config files ------------------- >>>>>>> 30.81s >>>>>>> repo_build : Execute the venv build scripts asynchonously -------------- >>>>>>> 29.69s >>>>>>> pip_install : Install distro packages ---------------------------------- >>>>>>> 23.56s >>>>>>> repo_server : Install repo server packages ----------------------------- >>>>>>> 20.11s >>>>>>> memcached_server : Install distro packages ----------------------------- >>>>>>> 16.35s >>>>>>> repo_build : Create venv build options files --------------------------- >>>>>>> 14.57s >>>>>>> haproxy_server : Install HAProxy Packages >>>>>>> ------------------------------- 8.35s >>>>>>> rsyslog_client : Install rsyslog packages >>>>>>> ------------------------------- 8.33s >>>>>>> rsyslog_client : Install rsyslog packages >>>>>>> ------------------------------- 7.64s >>>>>>> rsyslog_client : Install rsyslog packages >>>>>>> ------------------------------- 7.42s >>>>>>> repo_build : Wait for git clones to complete >>>>>>> ---------------------------- 7.25s >>>>>>> repo_server : Install repo caching server packages >>>>>>> ---------------------- 4.76s >>>>>>> galera_server : Check that WSREP is ready >>>>>>> ------------------------------- 4.18s >>>>>>> repo_server : Git service data folder setup >>>>>>> ----------------------------- 4.04s >>>>>>> ++ exit_fail 341 0 >>>>>>> ++ set +x >>>>>>> ++ info_block 'Error Info - 341' 0 >>>>>>> ++ echo >>>>>>> ---------------------------------------------------------------------- >>>>>>> ---------------------------------------------------------------------- >>>>>>> ++ print_info 'Error Info - 341' 0 >>>>>>> ++ PROC_NAME='- [ Error Info - 341 0 ] -' >>>>>>> ++ printf '\n%s%s\n' '- [ Error Info - 341 0 ] -' >>>>>>> 
-------------------------------------------- >>>>>>> >>>>>>> - [ Error Info - 341 0 ] --------------------------------------------- >>>>>>> ++ echo >>>>>>> ---------------------------------------------------------------------- >>>>>>> ---------------------------------------------------------------------- >>>>>>> ++ exit_state 1 >>>>>>> ++ set +x >>>>>>> ---------------------------------------------------------------------- >>>>>>> >>>>>>> - [ Run Time = 2030 seconds || 33 minutes ] -------------------------- >>>>>>> ---------------------------------------------------------------------- >>>>>>> ---------------------------------------------------------------------- >>>>>>> >>>>>>> - [ Status: Failure ] ------------------------------------------------ >>>>>>> ---------------------------------------------------------------------- >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> I don't know why it failed >>>>>>> >>>>>>> but i tried following: >>>>>>> >>>>>>> [root at aio ~]# lxc-ls -f >>>>>>> NAME STATE AUTOSTART GROUPS >>>>>>> IPV4 IPV6 >>>>>>> aio1_cinder_api_container-2af4dd01 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.62, 172.29.238.210, 172.29.244.152 - >>>>>>> aio1_cinder_scheduler_container-454db1fb RUNNING 1 onboot, >>>>>>> openstack 10.255.255.117, 172.29.239.172 - >>>>>>> aio1_designate_container-f7ea3f73 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.235, 172.29.239.166 - >>>>>>> aio1_galera_container-4f488f6a RUNNING 1 onboot, >>>>>>> openstack 10.255.255.193, 172.29.236.69 - >>>>>>> aio1_glance_container-f8caa9e6 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.225, 172.29.239.52, 172.29.246.25 - >>>>>>> aio1_heat_api_container-8321a763 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.104, 172.29.236.186 - >>>>>>> aio1_heat_apis_container-3f70ad74 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.166, 172.29.239.13 - >>>>>>> aio1_heat_engine_container-a18e5a0a RUNNING 1 onboot, >>>>>>> openstack 10.255.255.118, 172.29.238.7 - >>>>>>> aio1_horizon_container-e493275c RUNNING 1 onboot, >>>>>>> openstack 10.255.255.98, 172.29.237.43 - >>>>>>> aio1_keystone_container-c0e23e14 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.60, 172.29.237.165 - >>>>>>> aio1_memcached_container-ef8fed4c RUNNING 1 onboot, >>>>>>> openstack 10.255.255.214, 172.29.238.211 - >>>>>>> aio1_neutron_agents_container-131e996e RUNNING 1 onboot, >>>>>>> openstack 10.255.255.153, 172.29.237.246, 172.29.243.227 - >>>>>>> aio1_neutron_server_container-ccd69394 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.27, 172.29.236.129 - >>>>>>> aio1_nova_api_container-73274024 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.42, 172.29.238.201 - >>>>>>> aio1_nova_api_metadata_container-a1d32282 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.218, 172.29.238.153 - >>>>>>> aio1_nova_api_os_compute_container-52725940 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.109, 172.29.236.126 - >>>>>>> aio1_nova_api_placement_container-058e8031 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.29, 172.29.236.157 - >>>>>>> aio1_nova_conductor_container-9b6b208c RUNNING 1 onboot, >>>>>>> openstack 10.255.255.18, 172.29.239.9 - >>>>>>> aio1_nova_console_container-0fb8995c RUNNING 1 onboot, >>>>>>> openstack 10.255.255.47, 172.29.237.129 - >>>>>>> aio1_nova_scheduler_container-8f7a657a RUNNING 1 onboot, >>>>>>> openstack 10.255.255.195, 172.29.238.113 - >>>>>>> aio1_rabbit_mq_container-c3450d66 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.111, 172.29.237.202 - >>>>>>> aio1_repo_container-8e07fdef RUNNING 1 onboot, >>>>>>> openstack 10.255.255.141, 172.29.239.79 
- >>>>>>> aio1_rsyslog_container-b198fbe5 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.13, 172.29.236.195 - >>>>>>> aio1_swift_proxy_container-1a3536e1 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.108, 172.29.237.31, 172.29.244.248 - >>>>>>> aio1_utility_container-bd106f11 RUNNING 1 onboot, >>>>>>> openstack 10.255.255.54, 172.29.239.124 - >>>>>>> [root at aio ~]# lxc-a >>>>>>> lxc-attach lxc-autostart >>>>>>> [root at aio ~]# lxc-attach -n aio1_utility_container-bd106f11 >>>>>>> [root at aio1-utility-container-bd106f11 ~]# >>>>>>> [root at aio1-utility-container-bd106f11 ~]# source /root/openrc >>>>>>> [root at aio1-utility-container-bd106f11 ~]# openstack >>>>>>> openstack openstack-host-hostfile-setup.sh >>>>>>> [root at aio1-utility-container-bd106f11 ~]# openstack >>>>>>> openstack openstack-host-hostfile-setup.sh >>>>>>> [root at aio1-utility-container-bd106f11 ~]# openstack user list >>>>>>> Failed to discover available identity versions when contacting >>>>>>> http://172.29.236.100:5000/v3. Attempting to parse version from URL. >>>>>>> Service Unavailable (HTTP 503) >>>>>>> [root at aio1-utility-container-bd106f11 ~]# >>>>>>> >>>>>>> >>>>>>> not sure what is this error ? >>>>>>> >>>>>>> >>>>>>> On Sat, Feb 3, 2018 at 7:29 PM, Satish Patel >>>>>>> wrote: >>>>>>>> I have tired everything but didn't able to find solution :( what i am >>>>>>>> doing wrong here, i am following this instruction and please let me >>>>>>>> know if i am wrong >>>>>>>> >>>>>>>> >>>>>>>> https://developer.rackspace.com/blog/life-without-devstack-openstack-development-with-osa/ >>>>>>>> >>>>>>>> I have CentOS7, with 8 CPU and 16GB memory with 100GB disk size. >>>>>>>> >>>>>>>> Error: http://paste.openstack.org/show/660497/ >>>>>>>> >>>>>>>> >>>>>>>> I have tired gate-check-commit.sh but same error :( >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Sat, Feb 3, 2018 at 1:11 AM, Satish Patel >>>>>>>> wrote: >>>>>>>>> I have started playing with openstack-ansible on CentOS7 and trying to >>>>>>>>> install All-in-one but got this error and not sure what cause that >>>>>>>>> error how do i troubleshoot it? 
>>>>>>>>> >>>>>>>>> >>>>>>>>> TASK [bootstrap-host : Remove an existing private/public ssh keys if >>>>>>>>> one is missing] >>>>>>>>> >>>>>>>>> ************************************************************************ >>>>>>>>> skipping: [localhost] => (item=id_rsa) >>>>>>>>> skipping: [localhost] => (item=id_rsa.pub) >>>>>>>>> >>>>>>>>> TASK [bootstrap-host : Create ssh key pair for root] >>>>>>>>> >>>>>>>>> ******************************************************************************************************** >>>>>>>>> ok: [localhost] >>>>>>>>> >>>>>>>>> TASK [bootstrap-host : Fetch the generated public ssh key] >>>>>>>>> >>>>>>>>> ************************************************************************************************** >>>>>>>>> changed: [localhost] >>>>>>>>> >>>>>>>>> TASK [bootstrap-host : Ensure root's new public ssh key is in >>>>>>>>> authorized_keys] >>>>>>>>> >>>>>>>>> ****************************************************************************** >>>>>>>>> ok: [localhost] >>>>>>>>> >>>>>>>>> TASK [bootstrap-host : Create the required deployment directories] >>>>>>>>> >>>>>>>>> ****************************************************************************************** >>>>>>>>> changed: [localhost] => (item=/etc/openstack_deploy) >>>>>>>>> changed: [localhost] => (item=/etc/openstack_deploy/conf.d) >>>>>>>>> changed: [localhost] => (item=/etc/openstack_deploy/env.d) >>>>>>>>> >>>>>>>>> TASK [bootstrap-host : Deploy user conf.d configuration] >>>>>>>>> >>>>>>>>> **************************************************************************************************** >>>>>>>>> fatal: [localhost]: FAILED! => {"msg": "{{ >>>>>>>>> confd_overrides[bootstrap_host_scenario] }}: 'dict object' has no >>>>>>>>> attribute u'aio'"} >>>>>>>>> >>>>>>>>> RUNNING HANDLER [sshd : Reload the SSH service] >>>>>>>>> >>>>>>>>> ************************************************************************************************************* >>>>>>>>> to retry, use: --limit >>>>>>>>> @/opt/openstack-ansible/tests/bootstrap-aio.retry >>>>>>>>> >>>>>>>>> PLAY RECAP >>>>>>>>> ************************************************************************************************************************************************** >>>>>>>>> localhost : ok=61 changed=36 unreachable=0 >>>>>>>>> failed=2 >>>>>>>>> >>>>>>>>> [root at aio openstack-ansible]# >>>>>> >>>>>> _______________________________________________ >>>>>> Mailing list: >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>>>> Post to : openstack at lists.openstack.org >>>>>> Unsubscribe : >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>> Post to : openstack at lists.openstack.org >>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From remo at italy1.com Sun Feb 4 19:18:35 2018 From: remo at italy1.com (remo at italy1.com) Date: Sun, 4 Feb 2018 11:18:35 -0800 Subject: [Openstack] openstack-ansible aio error In-Reply-To: References: <2D5C1208-BA0A-491D-B232-6B0AB3EBC8CF@italy1.com> <1C807435-61CA-4A09-AD82-C6F3B91CEC03@italy1.com> Message-ID: <7CD76E10-3819-4C9E-BEA3-571F3FBACD6C@italy1.com> Content-Type: multipart/alternative; boundary="=_99d8f260a69d7936d3ced6f3be6bc645" --=_99d8f260a69d7936d3ced6f3be6bc645 Content-Transfer-Encoding: base64 Content-Type: text/plain; charset=utf-8 
Not sure about that tripleo is very complicated and ready for production where ansible OpenStack is probably not.

If you want to learn sure but let’s look at the facts production needs something more than what ansible OpenStack can now offer.

> Il giorno 04 feb 2018, alle ore 11:14, Satish Patel ha scritto:
>
> :)
>
> I am going to try openstack-ansible and and if i am lucky i will
> continue and plan to deploy on production but if it will take too much
> my time to debug then i would go with tripleO which seems less
> complicated so far.
>
> As you said openstack-ansible has good ubuntu community and we are
> 100% CentOS shop and i want something which we are comfortable and
> supported by deployment tool.
>
> My first production cluster is 20 node but it may slowly grow if all goes well.

KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioNCj4+Pj4+Pj4+Pj4gZmF0YWw6IFtsb2NhbGhvc3RdOiBGQUlMRUQhID0+IHsibXNnIjog Int7DQo+Pj4+Pj4+Pj4+IGNvbmZkX292ZXJyaWRlc1tib290c3RyYXBfaG9zdF9zY2VuYXJpb10g fX06ICdkaWN0IG9iamVjdCcgaGFzIG5vDQo+Pj4+Pj4+Pj4+IGF0dHJpYnV0ZSB1J2FpbycifQ0K Pj4+Pj4+Pj4+PiANCj4+Pj4+Pj4+Pj4gUlVOTklORyBIQU5ETEVSIFtzc2hkIDogUmVsb2FkIHRo ZSBTU0ggc2VydmljZV0NCj4+Pj4+Pj4+Pj4gDQo+Pj4+Pj4+Pj4+ICoqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCj4+Pj4+Pj4+Pj4gICAgICB0byBy ZXRyeSwgdXNlOiAtLWxpbWl0DQo+Pj4+Pj4+Pj4+IEAvb3B0L29wZW5zdGFjay1hbnNpYmxlL3Rl c3RzL2Jvb3RzdHJhcC1haW8ucmV0cnkNCj4+Pj4+Pj4+Pj4gDQo+Pj4+Pj4+Pj4+IFBMQVkgUkVD QVANCj4+Pj4+Pj4+Pj4gKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCj4+Pj4+Pj4+ Pj4gbG9jYWxob3N0ICAgICAgICAgICAgICAgICAgOiBvaz02MSAgIGNoYW5nZWQ9MzYgICB1bnJl YWNoYWJsZT0wDQo+Pj4+Pj4+Pj4+IGZhaWxlZD0yDQo+Pj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4+PiBb cm9vdEBhaW8gb3BlbnN0YWNrLWFuc2libGVdIw0KPj4+Pj4+PiANCj4+Pj4+Pj4gX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18NCj4+Pj4+Pj4gTWFpbGluZyBs aXN0Og0KPj4+Pj4+PiBodHRwOi8vbGlzdHMub3BlbnN0YWNrLm9yZy9jZ2ktYmluL21haWxtYW4v bGlzdGluZm8vb3BlbnN0YWNrDQo+Pj4+Pj4+IFBvc3QgdG8gICAgIDogb3BlbnN0YWNrQGxpc3Rz Lm9wZW5zdGFjay5vcmcNCj4+Pj4+Pj4gVW5zdWJzY3JpYmUgOg0KPj4+Pj4+PiBodHRwOi8vbGlz dHMub3BlbnN0YWNrLm9yZy9jZ2ktYmluL21haWxtYW4vbGlzdGluZm8vb3BlbnN0YWNrDQo+Pj4+ Pj4gDQo+Pj4+Pj4gDQo+Pj4+PiANCj4+Pj4+IF9fX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fDQo+Pj4+PiBNYWlsaW5nIGxpc3Q6IGh0dHA6Ly9saXN0cy5vcGVu c3RhY2sub3JnL2NnaS1iaW4vbWFpbG1hbi9saXN0aW5mby9vcGVuc3RhY2sNCj4+Pj4+IFBvc3Qg dG8gICAgIDogb3BlbnN0YWNrQGxpc3RzLm9wZW5zdGFjay5vcmcNCj4+Pj4+IFVuc3Vic2NyaWJl IDogaHR0cDovL2xpc3RzLm9wZW5zdGFjay5vcmcvY2dpLWJpbi9tYWlsbWFuL2xpc3RpbmZvL29w ZW5zdGFjaw0K --=_99d8f260a69d7936d3ced6f3be6bc645-- From satish.txt at gmail.com Mon Feb 5 04:53:46 2018 From: satish.txt at gmail.com (Satish Patel) Date: Sun, 4 Feb 2018 23:53:46 -0500 Subject: [Openstack] openstack-ansible aio error In-Reply-To: <7CD76E10-3819-4C9E-BEA3-571F3FBACD6C@italy1.com> References: <2D5C1208-BA0A-491D-B232-6B0AB3EBC8CF@italy1.com> <1C807435-61CA-4A09-AD82-C6F3B91CEC03@italy1.com> <7CD76E10-3819-4C9E-BEA3-571F3FBACD6C@italy1.com> Message-ID: Now i got this error when i am running following command, on my CentOS i do have "libselinux-python" installed but still ansible saying it is not installed. i have submit bug but lets see $openstack-ansible setup-hosts.yml TASK [openstack_hosts : include] **************************************************************************************************************************************************** Sunday 04 February 2018 23:49:29 -0500 (0:00:00.481) 0:00:31.439 ******* included: /etc/ansible/roles/openstack_hosts/tasks/openstack_update_hosts_file.yml for aio1 TASK [openstack_hosts : Drop hosts file entries script locally] ********************************************************************************************************************* Sunday 04 February 2018 23:49:30 -0500 (0:00:00.074) 0:00:31.513 ******* fatal: [aio1 -> localhost]: FAILED! 
=> {"changed": true, "failed": true, "msg": "Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!"} NO MORE HOSTS LEFT ****************************************************************************************************************************************************************** NO MORE HOSTS LEFT ****************************************************************************************************************************************************************** PLAY RECAP ************************************************************************************************************************************************************************** aio1 : ok=16 changed=7 unreachable=0 failed=1 Sunday 04 February 2018 23:49:41 -0500 (0:00:11.051) 0:00:42.565 ******* =============================================================================== openstack_hosts : Drop hosts file entries script locally --------------- 11.05s openstack_hosts : Install host packages -------------------------------- 11.03s openstack_hosts : Install EPEL, and yum priorities plugin --------------- 6.41s Install Python2 --------------------------------------------------------- 5.87s openstack_hosts : Download RDO repository RPM --------------------------- 2.60s openstack_hosts : Enable and set repo priorities ------------------------ 1.83s openstack_hosts : Install RDO repository and key ------------------------ 0.80s openstack_hosts : Disable requiretty for root sudo on centos ------------ 0.69s Gathering Facts --------------------------------------------------------- 0.59s openstack_hosts : Enable sysstat cron ----------------------------------- 0.48s openstack_hosts : Allow the usage of local facts ------------------------ 0.27s openstack_hosts : Disable yum fastestmirror plugin ---------------------- 0.26s openstack_hosts : Add global_environment_variables to environment file --- 0.24s Check for a supported Operating System ---------------------------------- 0.07s openstack_hosts : include ----------------------------------------------- 0.07s openstack_hosts : include ----------------------------------------------- 0.07s openstack_hosts : Gather variables for each operating system ------------ 0.05s apt_package_pinning : Add apt pin preferences --------------------------- 0.03s openstack_hosts : Remove conflicting distro packages -------------------- 0.02s openstack_hosts : Enable sysstat config --------------------------------- 0.01s [root at aio playbooks]# yum install libselinux-python Loaded plugins: priorities 243 packages excluded due to repository priority protections Package libselinux-python-2.5-11.el7.x86_64 already installed and latest version Nothing to do [root at aio playbooks]# On Sun, Feb 4, 2018 at 2:18 PM, wrote: > Not sure about that tripleo is very complicated and ready for production where ansible OpenStack is probably not. > > If you want to learn sure but let’s look at the facts production needs something more than what ansible OpenStack can now offer. > >> Il giorno 04 feb 2018, alle ore 11:14, Satish Patel ha scritto: >> >> :) >> >> I am going to try openstack-ansible and and if i am lucky i will >> continue and plan to deploy on production but if it will take too much >> my time to debug then i would go with tripleO which seems less >> complicated so far. >> >> As you said openstack-ansible has good ubuntu community and we are >> 100% CentOS shop and i want something which we are comfortable and >> supported by deployment tool. 
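Regarding the libselinux-python failure above: that message usually means the
Python interpreter Ansible runs on the target cannot import the SELinux
bindings, even when the RPM is installed for the system Python. A minimal
check, assuming the stock CentOS 7 interpreter at /usr/bin/python and the
default package name (a sketch, not the confirmed fix):

# the package itself
rpm -q libselinux-python
# can the system interpreter actually import the bindings?
/usr/bin/python -c 'import selinux; print(selinux.is_selinux_enabled())'
# is a different interpreter (e.g. a venv) forced anywhere in the user config?
grep -r ansible_python_interpreter /etc/openstack_deploy/ 2>/dev/null

If the import only fails for whatever interpreter Ansible actually uses,
pointing ansible_python_interpreter for that host at /usr/bin/python is one
possible workaround.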
>> >> My first production cluster is 20 node but it may slowly grow if all goes well. >> >>> On Sun, Feb 4, 2018 at 2:05 PM, wrote: >>> Tripleo = ha >>> Packstack = no ha >>> >>>> Il giorno 04 feb 2018, alle ore 11:00, Satish Patel ha scritto: >>>> >>>> Just wondering why did you say we can't do HA with TripleO? I thought >>>> it does support HA. am i missing something here? >>>> >>>>> On Sun, Feb 4, 2018 at 11:21 AM, wrote: >>>>> What are you looking for ha? Etc. Tripleo is the way to go for that packstack if you want simple deployment but no ha of course. >>>>> >>>>>> Il giorno 04 feb 2018, alle ore 07:53, Satish Patel ha scritto: >>>>>> >>>>>> Hi Marcin, >>>>>> >>>>>> Thank you, i will try other link, also i am using CentOS7 but anyway >>>>>> now question is does openstack-ansible ready for production deployment >>>>>> despite galera issues and bug? >>>>>> >>>>>> If i want to go on production should i wait or find other tools to >>>>>> deploy on production? >>>>>> >>>>>>> On Sun, Feb 4, 2018 at 5:29 AM, Marcin Dulak wrote: >>>>>>> When playing with openstack-ansible do it in a virtual setup (e.g. nested >>>>>>> virtualization with libvirt) so you can reproducibly bring up your >>>>>>> environment from scratch. >>>>>>> You will have to do it multiple times. >>>>>>> >>>>>>> https://developer.rackspace.com/blog/life-without-devstack-openstack-development-with-osa/ >>>>>>> is more than 2 years old. >>>>>>> >>>>>>> Try to follow >>>>>>> https://docs.openstack.org/openstack-ansible/latest/contributor/quickstart-aio.html >>>>>>> but git clone the latest state of the openstack-ansible repo. >>>>>>> The above page has a link that can be used to submit bugs directly to the >>>>>>> openstack-ansible project at launchpad. >>>>>>> In this way you may be able to cleanup/improve the documentation, >>>>>>> and since your setup is the simplest possible one your bug reports may get >>>>>>> noticed and reproduced by the developers. >>>>>>> What happens is that most people try openstack-ansible, don't report bugs, >>>>>>> or report the bugs without the information neccesary >>>>>>> to reproduce them, and abandon the whole idea. >>>>>>> >>>>>>> Try to search >>>>>>> https://bugs.launchpad.net/openstack-ansible/+bugs?field.searchtext=galera >>>>>>> for inspiration about what to do. >>>>>>> Currently the galera setup in openstack-ansible, especially on centos7 seems >>>>>>> to be undergoing some critical changes. >>>>>>> Enter the galera container: >>>>>>> lxc-attach -n aio1_galera_container-4f488f6a >>>>>>> look around it, check whether mysqld is running etc., try to identify which >>>>>>> ansible tasks failed and run them manually inside of the container. >>>>>>> >>>>>>> Marcin >>>>>>> >>>>>>> >>>>>>>> On Sun, Feb 4, 2018 at 3:41 AM, Satish Patel wrote: >>>>>>>> >>>>>>>> I have noticed in output "aio1_galera_container" is failed, how do i >>>>>>>> fixed this kind of issue? 
>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> PLAY RECAP >>>>>>>> ************************************************************************************************************************************************************************** >>>>>>>> aio1 : ok=41 changed=4 unreachable=0 >>>>>>>> failed=0 >>>>>>>> aio1_cinder_api_container-2af4dd01 : ok=0 changed=0 >>>>>>>> unreachable=0 failed=0 >>>>>>>> aio1_cinder_scheduler_container-454db1fb : ok=0 changed=0 >>>>>>>> unreachable=0 failed=0 >>>>>>>> aio1_designate_container-f7ea3f73 : ok=0 changed=0 unreachable=0 >>>>>>>> failed=0 >>>>>>>> aio1_galera_container-4f488f6a : ok=32 changed=3 unreachable=0 >>>>>>>> failed=1 >>>>>>>> >>>>>>>>> On Sat, Feb 3, 2018 at 9:26 PM, Satish Patel wrote: >>>>>>>>> I have re-install centos7 and give it a try and got this error >>>>>>>>> >>>>>>>>> DEBUG MESSAGE RECAP >>>>>>>>> ************************************************************ >>>>>>>>> DEBUG: [Load local packages] >>>>>>>>> *************************************************** >>>>>>>>> All items completed >>>>>>>>> >>>>>>>>> Saturday 03 February 2018 21:04:07 -0500 (0:00:04.175) >>>>>>>>> 0:16:17.204 ***** >>>>>>>>> >>>>>>>>> =============================================================================== >>>>>>>>> repo_build : Create OpenStack-Ansible requirement wheels -------------- >>>>>>>>> 268.16s >>>>>>>>> repo_build : Wait for the venvs builds to complete -------------------- >>>>>>>>> 110.30s >>>>>>>>> repo_build : Install packages ------------------------------------------ >>>>>>>>> 68.26s >>>>>>>>> repo_build : Clone git repositories asynchronously --------------------- >>>>>>>>> 59.85s >>>>>>>>> pip_install : Install distro packages ---------------------------------- >>>>>>>>> 36.72s >>>>>>>>> galera_client : Install galera distro packages ------------------------- >>>>>>>>> 33.21s >>>>>>>>> haproxy_server : Create haproxy service config files ------------------- >>>>>>>>> 30.81s >>>>>>>>> repo_build : Execute the venv build scripts asynchonously -------------- >>>>>>>>> 29.69s >>>>>>>>> pip_install : Install distro packages ---------------------------------- >>>>>>>>> 23.56s >>>>>>>>> repo_server : Install repo server packages ----------------------------- >>>>>>>>> 20.11s >>>>>>>>> memcached_server : Install distro packages ----------------------------- >>>>>>>>> 16.35s >>>>>>>>> repo_build : Create venv build options files --------------------------- >>>>>>>>> 14.57s >>>>>>>>> haproxy_server : Install HAProxy Packages >>>>>>>>> ------------------------------- 8.35s >>>>>>>>> rsyslog_client : Install rsyslog packages >>>>>>>>> ------------------------------- 8.33s >>>>>>>>> rsyslog_client : Install rsyslog packages >>>>>>>>> ------------------------------- 7.64s >>>>>>>>> rsyslog_client : Install rsyslog packages >>>>>>>>> ------------------------------- 7.42s >>>>>>>>> repo_build : Wait for git clones to complete >>>>>>>>> ---------------------------- 7.25s >>>>>>>>> repo_server : Install repo caching server packages >>>>>>>>> ---------------------- 4.76s >>>>>>>>> galera_server : Check that WSREP is ready >>>>>>>>> ------------------------------- 4.18s >>>>>>>>> repo_server : Git service data folder setup >>>>>>>>> ----------------------------- 4.04s >>>>>>>>> ++ exit_fail 341 0 >>>>>>>>> ++ set +x >>>>>>>>> ++ info_block 'Error Info - 341' 0 >>>>>>>>> ++ echo >>>>>>>>> ---------------------------------------------------------------------- >>>>>>>>> ---------------------------------------------------------------------- >>>>>>>>> ++ print_info 
'Error Info - 341' 0 >>>>>>>>> ++ PROC_NAME='- [ Error Info - 341 0 ] -' >>>>>>>>> ++ printf '\n%s%s\n' '- [ Error Info - 341 0 ] -' >>>>>>>>> -------------------------------------------- >>>>>>>>> >>>>>>>>> - [ Error Info - 341 0 ] --------------------------------------------- >>>>>>>>> ++ echo >>>>>>>>> ---------------------------------------------------------------------- >>>>>>>>> ---------------------------------------------------------------------- >>>>>>>>> ++ exit_state 1 >>>>>>>>> ++ set +x >>>>>>>>> ---------------------------------------------------------------------- >>>>>>>>> >>>>>>>>> - [ Run Time = 2030 seconds || 33 minutes ] -------------------------- >>>>>>>>> ---------------------------------------------------------------------- >>>>>>>>> ---------------------------------------------------------------------- >>>>>>>>> >>>>>>>>> - [ Status: Failure ] ------------------------------------------------ >>>>>>>>> ---------------------------------------------------------------------- >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> I don't know why it failed >>>>>>>>> >>>>>>>>> but i tried following: >>>>>>>>> >>>>>>>>> [root at aio ~]# lxc-ls -f >>>>>>>>> NAME STATE AUTOSTART GROUPS >>>>>>>>> IPV4 IPV6 >>>>>>>>> aio1_cinder_api_container-2af4dd01 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.62, 172.29.238.210, 172.29.244.152 - >>>>>>>>> aio1_cinder_scheduler_container-454db1fb RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.117, 172.29.239.172 - >>>>>>>>> aio1_designate_container-f7ea3f73 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.235, 172.29.239.166 - >>>>>>>>> aio1_galera_container-4f488f6a RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.193, 172.29.236.69 - >>>>>>>>> aio1_glance_container-f8caa9e6 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.225, 172.29.239.52, 172.29.246.25 - >>>>>>>>> aio1_heat_api_container-8321a763 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.104, 172.29.236.186 - >>>>>>>>> aio1_heat_apis_container-3f70ad74 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.166, 172.29.239.13 - >>>>>>>>> aio1_heat_engine_container-a18e5a0a RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.118, 172.29.238.7 - >>>>>>>>> aio1_horizon_container-e493275c RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.98, 172.29.237.43 - >>>>>>>>> aio1_keystone_container-c0e23e14 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.60, 172.29.237.165 - >>>>>>>>> aio1_memcached_container-ef8fed4c RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.214, 172.29.238.211 - >>>>>>>>> aio1_neutron_agents_container-131e996e RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.153, 172.29.237.246, 172.29.243.227 - >>>>>>>>> aio1_neutron_server_container-ccd69394 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.27, 172.29.236.129 - >>>>>>>>> aio1_nova_api_container-73274024 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.42, 172.29.238.201 - >>>>>>>>> aio1_nova_api_metadata_container-a1d32282 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.218, 172.29.238.153 - >>>>>>>>> aio1_nova_api_os_compute_container-52725940 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.109, 172.29.236.126 - >>>>>>>>> aio1_nova_api_placement_container-058e8031 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.29, 172.29.236.157 - >>>>>>>>> aio1_nova_conductor_container-9b6b208c RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.18, 172.29.239.9 - >>>>>>>>> aio1_nova_console_container-0fb8995c RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.47, 172.29.237.129 - >>>>>>>>> aio1_nova_scheduler_container-8f7a657a RUNNING 1 
onboot, >>>>>>>>> openstack 10.255.255.195, 172.29.238.113 - >>>>>>>>> aio1_rabbit_mq_container-c3450d66 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.111, 172.29.237.202 - >>>>>>>>> aio1_repo_container-8e07fdef RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.141, 172.29.239.79 - >>>>>>>>> aio1_rsyslog_container-b198fbe5 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.13, 172.29.236.195 - >>>>>>>>> aio1_swift_proxy_container-1a3536e1 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.108, 172.29.237.31, 172.29.244.248 - >>>>>>>>> aio1_utility_container-bd106f11 RUNNING 1 onboot, >>>>>>>>> openstack 10.255.255.54, 172.29.239.124 - >>>>>>>>> [root at aio ~]# lxc-a >>>>>>>>> lxc-attach lxc-autostart >>>>>>>>> [root at aio ~]# lxc-attach -n aio1_utility_container-bd106f11 >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# source /root/openrc >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# openstack >>>>>>>>> openstack openstack-host-hostfile-setup.sh >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# openstack >>>>>>>>> openstack openstack-host-hostfile-setup.sh >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# openstack user list >>>>>>>>> Failed to discover available identity versions when contacting >>>>>>>>> http://172.29.236.100:5000/v3. Attempting to parse version from URL. >>>>>>>>> Service Unavailable (HTTP 503) >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# >>>>>>>>> >>>>>>>>> >>>>>>>>> not sure what is this error ? >>>>>>>>> >>>>>>>>> >>>>>>>>> On Sat, Feb 3, 2018 at 7:29 PM, Satish Patel >>>>>>>>> wrote: >>>>>>>>>> I have tired everything but didn't able to find solution :( what i am >>>>>>>>>> doing wrong here, i am following this instruction and please let me >>>>>>>>>> know if i am wrong >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> https://developer.rackspace.com/blog/life-without-devstack-openstack-development-with-osa/ >>>>>>>>>> >>>>>>>>>> I have CentOS7, with 8 CPU and 16GB memory with 100GB disk size. >>>>>>>>>> >>>>>>>>>> Error: http://paste.openstack.org/show/660497/ >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> I have tired gate-check-commit.sh but same error :( >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Sat, Feb 3, 2018 at 1:11 AM, Satish Patel >>>>>>>>>> wrote: >>>>>>>>>>> I have started playing with openstack-ansible on CentOS7 and trying to >>>>>>>>>>> install All-in-one but got this error and not sure what cause that >>>>>>>>>>> error how do i troubleshoot it? 
>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> TASK [bootstrap-host : Remove an existing private/public ssh keys if >>>>>>>>>>> one is missing] >>>>>>>>>>> >>>>>>>>>>> ************************************************************************ >>>>>>>>>>> skipping: [localhost] => (item=id_rsa) >>>>>>>>>>> skipping: [localhost] => (item=id_rsa.pub) >>>>>>>>>>> >>>>>>>>>>> TASK [bootstrap-host : Create ssh key pair for root] >>>>>>>>>>> >>>>>>>>>>> ******************************************************************************************************** >>>>>>>>>>> ok: [localhost] >>>>>>>>>>> >>>>>>>>>>> TASK [bootstrap-host : Fetch the generated public ssh key] >>>>>>>>>>> >>>>>>>>>>> ************************************************************************************************** >>>>>>>>>>> changed: [localhost] >>>>>>>>>>> >>>>>>>>>>> TASK [bootstrap-host : Ensure root's new public ssh key is in >>>>>>>>>>> authorized_keys] >>>>>>>>>>> >>>>>>>>>>> ****************************************************************************** >>>>>>>>>>> ok: [localhost] >>>>>>>>>>> >>>>>>>>>>> TASK [bootstrap-host : Create the required deployment directories] >>>>>>>>>>> >>>>>>>>>>> ****************************************************************************************** >>>>>>>>>>> changed: [localhost] => (item=/etc/openstack_deploy) >>>>>>>>>>> changed: [localhost] => (item=/etc/openstack_deploy/conf.d) >>>>>>>>>>> changed: [localhost] => (item=/etc/openstack_deploy/env.d) >>>>>>>>>>> >>>>>>>>>>> TASK [bootstrap-host : Deploy user conf.d configuration] >>>>>>>>>>> >>>>>>>>>>> **************************************************************************************************** >>>>>>>>>>> fatal: [localhost]: FAILED! => {"msg": "{{ >>>>>>>>>>> confd_overrides[bootstrap_host_scenario] }}: 'dict object' has no >>>>>>>>>>> attribute u'aio'"} >>>>>>>>>>> >>>>>>>>>>> RUNNING HANDLER [sshd : Reload the SSH service] >>>>>>>>>>> >>>>>>>>>>> ************************************************************************************************************* >>>>>>>>>>> to retry, use: --limit >>>>>>>>>>> @/opt/openstack-ansible/tests/bootstrap-aio.retry >>>>>>>>>>> >>>>>>>>>>> PLAY RECAP >>>>>>>>>>> ************************************************************************************************************************************************** >>>>>>>>>>> localhost : ok=61 changed=36 unreachable=0 >>>>>>>>>>> failed=2 >>>>>>>>>>> >>>>>>>>>>> [root at aio openstack-ansible]# >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> Mailing list: >>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>>>>>> Post to : openstack at lists.openstack.org >>>>>>>> Unsubscribe : >>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>>>>> >>>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>>>> Post to : openstack at lists.openstack.org >>>>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From marcin.dulak at gmail.com Mon Feb 5 08:34:13 2018 From: marcin.dulak at gmail.com (Marcin Dulak) Date: Mon, 5 Feb 2018 09:34:13 +0100 Subject: [Openstack] openstack-ansible aio error In-Reply-To: References: <2D5C1208-BA0A-491D-B232-6B0AB3EBC8CF@italy1.com> <1C807435-61CA-4A09-AD82-C6F3B91CEC03@italy1.com> <7CD76E10-3819-4C9E-BEA3-571F3FBACD6C@italy1.com> Message-ID: Please report a proper bug at https://bugs.launchpad.net/openstack-ansible/ - with the git hash you are using, 
and all steps to reproduce Marcin On Mon, Feb 5, 2018 at 5:53 AM, Satish Patel wrote: > Now i got this error when i am running following command, on my CentOS > i do have "libselinux-python" installed but still ansible saying it is > not installed. i have submit bug but lets see > > $openstack-ansible setup-hosts.yml > > > TASK [openstack_hosts : include] > ************************************************************ > ************************************************************ > **************************** > Sunday 04 February 2018 23:49:29 -0500 (0:00:00.481) 0:00:31.439 > ******* > included: /etc/ansible/roles/openstack_hosts/tasks/openstack_update_ > hosts_file.yml > for aio1 > > TASK [openstack_hosts : Drop hosts file entries script locally] > ************************************************************ > ********************************************************* > Sunday 04 February 2018 23:49:30 -0500 (0:00:00.074) 0:00:31.513 > ******* > fatal: [aio1 -> localhost]: FAILED! => {"changed": true, "failed": > true, "msg": "Aborting, target uses selinux but python bindings > (libselinux-python) aren't installed!"} > > NO MORE HOSTS LEFT > ************************************************************ > ************************************************************ > ****************************************** > > NO MORE HOSTS LEFT > ************************************************************ > ************************************************************ > ****************************************** > > PLAY RECAP ************************************************************ > ************************************************************ > ************************************************** > aio1 : ok=16 changed=7 unreachable=0 failed=1 > > Sunday 04 February 2018 23:49:41 -0500 (0:00:11.051) 0:00:42.565 > ******* > ============================================================ > =================== > openstack_hosts : Drop hosts file entries script locally --------------- > 11.05s > openstack_hosts : Install host packages -------------------------------- > 11.03s > openstack_hosts : Install EPEL, and yum priorities plugin --------------- > 6.41s > Install Python2 --------------------------------------------------------- > 5.87s > openstack_hosts : Download RDO repository RPM --------------------------- > 2.60s > openstack_hosts : Enable and set repo priorities ------------------------ > 1.83s > openstack_hosts : Install RDO repository and key ------------------------ > 0.80s > openstack_hosts : Disable requiretty for root sudo on centos ------------ > 0.69s > Gathering Facts --------------------------------------------------------- > 0.59s > openstack_hosts : Enable sysstat cron ----------------------------------- > 0.48s > openstack_hosts : Allow the usage of local facts ------------------------ > 0.27s > openstack_hosts : Disable yum fastestmirror plugin ---------------------- > 0.26s > openstack_hosts : Add global_environment_variables to environment file --- > 0.24s > Check for a supported Operating System ---------------------------------- > 0.07s > openstack_hosts : include ----------------------------------------------- > 0.07s > openstack_hosts : include ----------------------------------------------- > 0.07s > openstack_hosts : Gather variables for each operating system ------------ > 0.05s > apt_package_pinning : Add apt pin preferences --------------------------- > 0.03s > openstack_hosts : Remove conflicting distro packages -------------------- > 0.02s > openstack_hosts : Enable 
sysstat config --------------------------------- > 0.01s > [root at aio playbooks]# yum install libselinux-python > Loaded plugins: priorities > 243 packages excluded due to repository priority protections > Package libselinux-python-2.5-11.el7.x86_64 already installed and latest > version > Nothing to do > [root at aio playbooks]# > > On Sun, Feb 4, 2018 at 2:18 PM, wrote: > > Not sure about that tripleo is very complicated and ready for production > where ansible OpenStack is probably not. > > > > If you want to learn sure but let’s look at the facts production needs > something more than what ansible OpenStack can now offer. > > > >> Il giorno 04 feb 2018, alle ore 11:14, Satish Patel < > satish.txt at gmail.com> ha scritto: > >> > >> :) > >> > >> I am going to try openstack-ansible and and if i am lucky i will > >> continue and plan to deploy on production but if it will take too much > >> my time to debug then i would go with tripleO which seems less > >> complicated so far. > >> > >> As you said openstack-ansible has good ubuntu community and we are > >> 100% CentOS shop and i want something which we are comfortable and > >> supported by deployment tool. > >> > >> My first production cluster is 20 node but it may slowly grow if all > goes well. > >> > >>> On Sun, Feb 4, 2018 at 2:05 PM, wrote: > >>> Tripleo = ha > >>> Packstack = no ha > >>> > >>>> Il giorno 04 feb 2018, alle ore 11:00, Satish Patel < > satish.txt at gmail.com> ha scritto: > >>>> > >>>> Just wondering why did you say we can't do HA with TripleO? I thought > >>>> it does support HA. am i missing something here? > >>>> > >>>>> On Sun, Feb 4, 2018 at 11:21 AM, wrote: > >>>>> What are you looking for ha? Etc. Tripleo is the way to go for that > packstack if you want simple deployment but no ha of course. > >>>>> > >>>>>> Il giorno 04 feb 2018, alle ore 07:53, Satish Patel < > satish.txt at gmail.com> ha scritto: > >>>>>> > >>>>>> Hi Marcin, > >>>>>> > >>>>>> Thank you, i will try other link, also i am using CentOS7 but anyway > >>>>>> now question is does openstack-ansible ready for production > deployment > >>>>>> despite galera issues and bug? > >>>>>> > >>>>>> If i want to go on production should i wait or find other tools to > >>>>>> deploy on production? > >>>>>> > >>>>>>> On Sun, Feb 4, 2018 at 5:29 AM, Marcin Dulak < > marcin.dulak at gmail.com> wrote: > >>>>>>> When playing with openstack-ansible do it in a virtual setup (e.g. > nested > >>>>>>> virtualization with libvirt) so you can reproducibly bring up your > >>>>>>> environment from scratch. > >>>>>>> You will have to do it multiple times. > >>>>>>> > >>>>>>> https://developer.rackspace.com/blog/life-without- > devstack-openstack-development-with-osa/ > >>>>>>> is more than 2 years old. > >>>>>>> > >>>>>>> Try to follow > >>>>>>> https://docs.openstack.org/openstack-ansible/latest/ > contributor/quickstart-aio.html > >>>>>>> but git clone the latest state of the openstack-ansible repo. > >>>>>>> The above page has a link that can be used to submit bugs directly > to the > >>>>>>> openstack-ansible project at launchpad. > >>>>>>> In this way you may be able to cleanup/improve the documentation, > >>>>>>> and since your setup is the simplest possible one your bug reports > may get > >>>>>>> noticed and reproduced by the developers. > >>>>>>> What happens is that most people try openstack-ansible, don't > report bugs, > >>>>>>> or report the bugs without the information neccesary > >>>>>>> to reproduce them, and abandon the whole idea. 
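For the bug report, the exact tree being deployed can be captured straight
from the checkout; this assumes the default /opt/openstack-ansible clone
location:

cd /opt/openstack-ansible
git rev-parse HEAD       # exact commit hash
git describe --tags      # nearest tag, if the checkout has tags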
> >>>>>>> > >>>>>>> Try to search > >>>>>>> https://bugs.launchpad.net/openstack-ansible/+bugs?field. > searchtext=galera > >>>>>>> for inspiration about what to do. > >>>>>>> Currently the galera setup in openstack-ansible, especially on > centos7 seems > >>>>>>> to be undergoing some critical changes. > >>>>>>> Enter the galera container: > >>>>>>> lxc-attach -n aio1_galera_container-4f488f6a > >>>>>>> look around it, check whether mysqld is running etc., try to > identify which > >>>>>>> ansible tasks failed and run them manually inside of the container. > >>>>>>> > >>>>>>> Marcin > >>>>>>> > >>>>>>> > >>>>>>>> On Sun, Feb 4, 2018 at 3:41 AM, Satish Patel < > satish.txt at gmail.com> wrote: > >>>>>>>> > >>>>>>>> I have noticed in output "aio1_galera_container" is failed, how > do i > >>>>>>>> fixed this kind of issue? > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> PLAY RECAP > >>>>>>>> ************************************************************ > ************************************************************ > ************************************************** > >>>>>>>> aio1 : ok=41 changed=4 unreachable=0 > >>>>>>>> failed=0 > >>>>>>>> aio1_cinder_api_container-2af4dd01 : ok=0 changed=0 > >>>>>>>> unreachable=0 failed=0 > >>>>>>>> aio1_cinder_scheduler_container-454db1fb : ok=0 changed=0 > >>>>>>>> unreachable=0 failed=0 > >>>>>>>> aio1_designate_container-f7ea3f73 : ok=0 changed=0 > unreachable=0 > >>>>>>>> failed=0 > >>>>>>>> aio1_galera_container-4f488f6a : ok=32 changed=3 > unreachable=0 > >>>>>>>> failed=1 > >>>>>>>> > >>>>>>>>> On Sat, Feb 3, 2018 at 9:26 PM, Satish Patel < > satish.txt at gmail.com> wrote: > >>>>>>>>> I have re-install centos7 and give it a try and got this error > >>>>>>>>> > >>>>>>>>> DEBUG MESSAGE RECAP > >>>>>>>>> ************************************************************ > >>>>>>>>> DEBUG: [Load local packages] > >>>>>>>>> *************************************************** > >>>>>>>>> All items completed > >>>>>>>>> > >>>>>>>>> Saturday 03 February 2018 21:04:07 -0500 (0:00:04.175) > >>>>>>>>> 0:16:17.204 ***** > >>>>>>>>> > >>>>>>>>> ============================================================ > =================== > >>>>>>>>> repo_build : Create OpenStack-Ansible requirement wheels > -------------- > >>>>>>>>> 268.16s > >>>>>>>>> repo_build : Wait for the venvs builds to complete > -------------------- > >>>>>>>>> 110.30s > >>>>>>>>> repo_build : Install packages ------------------------------ > ------------ > >>>>>>>>> 68.26s > >>>>>>>>> repo_build : Clone git repositories asynchronously > --------------------- > >>>>>>>>> 59.85s > >>>>>>>>> pip_install : Install distro packages > ---------------------------------- > >>>>>>>>> 36.72s > >>>>>>>>> galera_client : Install galera distro packages > ------------------------- > >>>>>>>>> 33.21s > >>>>>>>>> haproxy_server : Create haproxy service config files > ------------------- > >>>>>>>>> 30.81s > >>>>>>>>> repo_build : Execute the venv build scripts asynchonously > -------------- > >>>>>>>>> 29.69s > >>>>>>>>> pip_install : Install distro packages > ---------------------------------- > >>>>>>>>> 23.56s > >>>>>>>>> repo_server : Install repo server packages > ----------------------------- > >>>>>>>>> 20.11s > >>>>>>>>> memcached_server : Install distro packages > ----------------------------- > >>>>>>>>> 16.35s > >>>>>>>>> repo_build : Create venv build options files > --------------------------- > >>>>>>>>> 14.57s > >>>>>>>>> haproxy_server : Install HAProxy Packages > >>>>>>>>> 
------------------------------- 8.35s > >>>>>>>>> rsyslog_client : Install rsyslog packages > >>>>>>>>> ------------------------------- 8.33s > >>>>>>>>> rsyslog_client : Install rsyslog packages > >>>>>>>>> ------------------------------- 7.64s > >>>>>>>>> rsyslog_client : Install rsyslog packages > >>>>>>>>> ------------------------------- 7.42s > >>>>>>>>> repo_build : Wait for git clones to complete > >>>>>>>>> ---------------------------- 7.25s > >>>>>>>>> repo_server : Install repo caching server packages > >>>>>>>>> ---------------------- 4.76s > >>>>>>>>> galera_server : Check that WSREP is ready > >>>>>>>>> ------------------------------- 4.18s > >>>>>>>>> repo_server : Git service data folder setup > >>>>>>>>> ----------------------------- 4.04s > >>>>>>>>> ++ exit_fail 341 0 > >>>>>>>>> ++ set +x > >>>>>>>>> ++ info_block 'Error Info - 341' 0 > >>>>>>>>> ++ echo > >>>>>>>>> ------------------------------------------------------------ > ---------- > >>>>>>>>> ------------------------------------------------------------ > ---------- > >>>>>>>>> ++ print_info 'Error Info - 341' 0 > >>>>>>>>> ++ PROC_NAME='- [ Error Info - 341 0 ] -' > >>>>>>>>> ++ printf '\n%s%s\n' '- [ Error Info - 341 0 ] -' > >>>>>>>>> -------------------------------------------- > >>>>>>>>> > >>>>>>>>> - [ Error Info - 341 0 ] ------------------------------ > --------------- > >>>>>>>>> ++ echo > >>>>>>>>> ------------------------------------------------------------ > ---------- > >>>>>>>>> ------------------------------------------------------------ > ---------- > >>>>>>>>> ++ exit_state 1 > >>>>>>>>> ++ set +x > >>>>>>>>> ------------------------------------------------------------ > ---------- > >>>>>>>>> > >>>>>>>>> - [ Run Time = 2030 seconds || 33 minutes ] > -------------------------- > >>>>>>>>> ------------------------------------------------------------ > ---------- > >>>>>>>>> ------------------------------------------------------------ > ---------- > >>>>>>>>> > >>>>>>>>> - [ Status: Failure ] ------------------------------ > ------------------ > >>>>>>>>> ------------------------------------------------------------ > ---------- > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> I don't know why it failed > >>>>>>>>> > >>>>>>>>> but i tried following: > >>>>>>>>> > >>>>>>>>> [root at aio ~]# lxc-ls -f > >>>>>>>>> NAME STATE AUTOSTART > GROUPS > >>>>>>>>> IPV4 IPV6 > >>>>>>>>> aio1_cinder_api_container-2af4dd01 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.62, 172.29.238.210, 172.29.244.152 - > >>>>>>>>> aio1_cinder_scheduler_container-454db1fb RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.117, 172.29.239.172 - > >>>>>>>>> aio1_designate_container-f7ea3f73 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.235, 172.29.239.166 - > >>>>>>>>> aio1_galera_container-4f488f6a RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.193, 172.29.236.69 - > >>>>>>>>> aio1_glance_container-f8caa9e6 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.225, 172.29.239.52, 172.29.246.25 - > >>>>>>>>> aio1_heat_api_container-8321a763 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.104, 172.29.236.186 - > >>>>>>>>> aio1_heat_apis_container-3f70ad74 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.166, 172.29.239.13 - > >>>>>>>>> aio1_heat_engine_container-a18e5a0a RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.118, 172.29.238.7 - > >>>>>>>>> aio1_horizon_container-e493275c RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.98, 172.29.237.43 - > >>>>>>>>> 
aio1_keystone_container-c0e23e14 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.60, 172.29.237.165 - > >>>>>>>>> aio1_memcached_container-ef8fed4c RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.214, 172.29.238.211 - > >>>>>>>>> aio1_neutron_agents_container-131e996e RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.153, 172.29.237.246, 172.29.243.227 - > >>>>>>>>> aio1_neutron_server_container-ccd69394 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.27, 172.29.236.129 - > >>>>>>>>> aio1_nova_api_container-73274024 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.42, 172.29.238.201 - > >>>>>>>>> aio1_nova_api_metadata_container-a1d32282 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.218, 172.29.238.153 - > >>>>>>>>> aio1_nova_api_os_compute_container-52725940 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.109, 172.29.236.126 - > >>>>>>>>> aio1_nova_api_placement_container-058e8031 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.29, 172.29.236.157 - > >>>>>>>>> aio1_nova_conductor_container-9b6b208c RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.18, 172.29.239.9 - > >>>>>>>>> aio1_nova_console_container-0fb8995c RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.47, 172.29.237.129 - > >>>>>>>>> aio1_nova_scheduler_container-8f7a657a RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.195, 172.29.238.113 - > >>>>>>>>> aio1_rabbit_mq_container-c3450d66 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.111, 172.29.237.202 - > >>>>>>>>> aio1_repo_container-8e07fdef RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.141, 172.29.239.79 - > >>>>>>>>> aio1_rsyslog_container-b198fbe5 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.13, 172.29.236.195 - > >>>>>>>>> aio1_swift_proxy_container-1a3536e1 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.108, 172.29.237.31, 172.29.244.248 - > >>>>>>>>> aio1_utility_container-bd106f11 RUNNING 1 > onboot, > >>>>>>>>> openstack 10.255.255.54, 172.29.239.124 - > >>>>>>>>> [root at aio ~]# lxc-a > >>>>>>>>> lxc-attach lxc-autostart > >>>>>>>>> [root at aio ~]# lxc-attach -n aio1_utility_container-bd106f11 > >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# > >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# source /root/openrc > >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# openstack > >>>>>>>>> openstack openstack-host-hostfile-setup. > sh > >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# openstack > >>>>>>>>> openstack openstack-host-hostfile-setup. > sh > >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# openstack user list > >>>>>>>>> Failed to discover available identity versions when contacting > >>>>>>>>> http://172.29.236.100:5000/v3. Attempting to parse version from > URL. > >>>>>>>>> Service Unavailable (HTTP 503) > >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> not sure what is this error ? > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> On Sat, Feb 3, 2018 at 7:29 PM, Satish Patel < > satish.txt at gmail.com> > >>>>>>>>> wrote: > >>>>>>>>>> I have tired everything but didn't able to find solution :( > what i am > >>>>>>>>>> doing wrong here, i am following this instruction and please > let me > >>>>>>>>>> know if i am wrong > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> https://developer.rackspace.com/blog/life-without- > devstack-openstack-development-with-osa/ > >>>>>>>>>> > >>>>>>>>>> I have CentOS7, with 8 CPU and 16GB memory with 100GB disk size. 
> >>>>>>>>>> > >>>>>>>>>> Error: http://paste.openstack.org/show/660497/ > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> I have tired gate-check-commit.sh but same error :( > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> On Sat, Feb 3, 2018 at 1:11 AM, Satish Patel < > satish.txt at gmail.com> > >>>>>>>>>> wrote: > >>>>>>>>>>> I have started playing with openstack-ansible on CentOS7 and > trying to > >>>>>>>>>>> install All-in-one but got this error and not sure what cause > that > >>>>>>>>>>> error how do i troubleshoot it? > >>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>> TASK [bootstrap-host : Remove an existing private/public ssh > keys if > >>>>>>>>>>> one is missing] > >>>>>>>>>>> > >>>>>>>>>>> ************************************************************ > ************ > >>>>>>>>>>> skipping: [localhost] => (item=id_rsa) > >>>>>>>>>>> skipping: [localhost] => (item=id_rsa.pub) > >>>>>>>>>>> > >>>>>>>>>>> TASK [bootstrap-host : Create ssh key pair for root] > >>>>>>>>>>> > >>>>>>>>>>> ************************************************************ > ******************************************** > >>>>>>>>>>> ok: [localhost] > >>>>>>>>>>> > >>>>>>>>>>> TASK [bootstrap-host : Fetch the generated public ssh key] > >>>>>>>>>>> > >>>>>>>>>>> ************************************************************ > ************************************** > >>>>>>>>>>> changed: [localhost] > >>>>>>>>>>> > >>>>>>>>>>> TASK [bootstrap-host : Ensure root's new public ssh key is in > >>>>>>>>>>> authorized_keys] > >>>>>>>>>>> > >>>>>>>>>>> ************************************************************ > ****************** > >>>>>>>>>>> ok: [localhost] > >>>>>>>>>>> > >>>>>>>>>>> TASK [bootstrap-host : Create the required deployment > directories] > >>>>>>>>>>> > >>>>>>>>>>> ************************************************************ > ****************************** > >>>>>>>>>>> changed: [localhost] => (item=/etc/openstack_deploy) > >>>>>>>>>>> changed: [localhost] => (item=/etc/openstack_deploy/conf.d) > >>>>>>>>>>> changed: [localhost] => (item=/etc/openstack_deploy/env.d) > >>>>>>>>>>> > >>>>>>>>>>> TASK [bootstrap-host : Deploy user conf.d configuration] > >>>>>>>>>>> > >>>>>>>>>>> ************************************************************ > **************************************** > >>>>>>>>>>> fatal: [localhost]: FAILED! 
=> {"msg": "{{ > >>>>>>>>>>> confd_overrides[bootstrap_host_scenario] }}: 'dict object' > has no > >>>>>>>>>>> attribute u'aio'"} > >>>>>>>>>>> > >>>>>>>>>>> RUNNING HANDLER [sshd : Reload the SSH service] > >>>>>>>>>>> > >>>>>>>>>>> ************************************************************ > ************************************************* > >>>>>>>>>>> to retry, use: --limit > >>>>>>>>>>> @/opt/openstack-ansible/tests/bootstrap-aio.retry > >>>>>>>>>>> > >>>>>>>>>>> PLAY RECAP > >>>>>>>>>>> ************************************************************ > ************************************************************ > ************************** > >>>>>>>>>>> localhost : ok=61 changed=36 unreachable=0 > >>>>>>>>>>> failed=2 > >>>>>>>>>>> > >>>>>>>>>>> [root at aio openstack-ansible]# > >>>>>>>> > >>>>>>>> _______________________________________________ > >>>>>>>> Mailing list: > >>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >>>>>>>> Post to : openstack at lists.openstack.org > >>>>>>>> Unsubscribe : > >>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >>>>>>> > >>>>>>> > >>>>>> > >>>>>> _______________________________________________ > >>>>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > >>>>>> Post to : openstack at lists.openstack.org > >>>>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.dulak at gmail.com Mon Feb 5 08:36:38 2018 From: marcin.dulak at gmail.com (Marcin Dulak) Date: Mon, 5 Feb 2018 09:36:38 +0100 Subject: [Openstack] openstack-ansible aio error In-Reply-To: References: <2D5C1208-BA0A-491D-B232-6B0AB3EBC8CF@italy1.com> <1C807435-61CA-4A09-AD82-C6F3B91CEC03@italy1.com> <7CD76E10-3819-4C9E-BEA3-571F3FBACD6C@italy1.com> Message-ID: You need to provide a hash at https://bugs.launchpad.net/openstack-ansible/+bug/1747313 Has by chance your host selinux disabled? Marcin On Mon, Feb 5, 2018 at 9:34 AM, Marcin Dulak wrote: > Please report a proper bug at https://bugs.launchpad.net/ > openstack-ansible/ - with the git hash you are using, and all steps to > reproduce > > Marcin > > On Mon, Feb 5, 2018 at 5:53 AM, Satish Patel wrote: > >> Now i got this error when i am running following command, on my CentOS >> i do have "libselinux-python" installed but still ansible saying it is >> not installed. i have submit bug but lets see >> >> $openstack-ansible setup-hosts.yml >> >> >> TASK [openstack_hosts : include] >> ************************************************************ >> ************************************************************ >> **************************** >> Sunday 04 February 2018 23:49:29 -0500 (0:00:00.481) 0:00:31.439 >> ******* >> included: /etc/ansible/roles/openstack_hosts/tasks/openstack_update_ho >> sts_file.yml >> for aio1 >> >> TASK [openstack_hosts : Drop hosts file entries script locally] >> ************************************************************ >> ********************************************************* >> Sunday 04 February 2018 23:49:30 -0500 (0:00:00.074) 0:00:31.513 >> ******* >> fatal: [aio1 -> localhost]: FAILED! 
=> {"changed": true, "failed": >> true, "msg": "Aborting, target uses selinux but python bindings >> (libselinux-python) aren't installed!"} >> >> NO MORE HOSTS LEFT >> ************************************************************ >> ************************************************************ >> ****************************************** >> >> NO MORE HOSTS LEFT >> ************************************************************ >> ************************************************************ >> ****************************************** >> >> PLAY RECAP ************************************************************ >> ************************************************************ >> ************************************************** >> aio1 : ok=16 changed=7 unreachable=0 >> failed=1 >> >> Sunday 04 February 2018 23:49:41 -0500 (0:00:11.051) 0:00:42.565 >> ******* >> ============================================================ >> =================== >> openstack_hosts : Drop hosts file entries script locally --------------- >> 11.05s >> openstack_hosts : Install host packages -------------------------------- >> 11.03s >> openstack_hosts : Install EPEL, and yum priorities plugin --------------- >> 6.41s >> Install Python2 --------------------------------------------------------- >> 5.87s >> openstack_hosts : Download RDO repository RPM --------------------------- >> 2.60s >> openstack_hosts : Enable and set repo priorities ------------------------ >> 1.83s >> openstack_hosts : Install RDO repository and key ------------------------ >> 0.80s >> openstack_hosts : Disable requiretty for root sudo on centos ------------ >> 0.69s >> Gathering Facts --------------------------------------------------------- >> 0.59s >> openstack_hosts : Enable sysstat cron ----------------------------------- >> 0.48s >> openstack_hosts : Allow the usage of local facts ------------------------ >> 0.27s >> openstack_hosts : Disable yum fastestmirror plugin ---------------------- >> 0.26s >> openstack_hosts : Add global_environment_variables to environment file >> --- 0.24s >> Check for a supported Operating System ---------------------------------- >> 0.07s >> openstack_hosts : include ----------------------------------------------- >> 0.07s >> openstack_hosts : include ----------------------------------------------- >> 0.07s >> openstack_hosts : Gather variables for each operating system ------------ >> 0.05s >> apt_package_pinning : Add apt pin preferences --------------------------- >> 0.03s >> openstack_hosts : Remove conflicting distro packages -------------------- >> 0.02s >> openstack_hosts : Enable sysstat config --------------------------------- >> 0.01s >> [root at aio playbooks]# yum install libselinux-python >> Loaded plugins: priorities >> 243 packages excluded due to repository priority protections >> Package libselinux-python-2.5-11.el7.x86_64 already installed and latest >> version >> Nothing to do >> [root at aio playbooks]# >> >> On Sun, Feb 4, 2018 at 2:18 PM, wrote: >> > Not sure about that tripleo is very complicated and ready for >> production where ansible OpenStack is probably not. >> > >> > If you want to learn sure but let’s look at the facts production needs >> something more than what ansible OpenStack can now offer. 
>> > >> >> Il giorno 04 feb 2018, alle ore 11:14, Satish Patel < >> satish.txt at gmail.com> ha scritto: >> >> >> >> :) >> >> >> >> I am going to try openstack-ansible and and if i am lucky i will >> >> continue and plan to deploy on production but if it will take too much >> >> my time to debug then i would go with tripleO which seems less >> >> complicated so far. >> >> >> >> As you said openstack-ansible has good ubuntu community and we are >> >> 100% CentOS shop and i want something which we are comfortable and >> >> supported by deployment tool. >> >> >> >> My first production cluster is 20 node but it may slowly grow if all >> goes well. >> >> >> >>> On Sun, Feb 4, 2018 at 2:05 PM, wrote: >> >>> Tripleo = ha >> >>> Packstack = no ha >> >>> >> >>>> Il giorno 04 feb 2018, alle ore 11:00, Satish Patel < >> satish.txt at gmail.com> ha scritto: >> >>>> >> >>>> Just wondering why did you say we can't do HA with TripleO? I >> thought >> >>>> it does support HA. am i missing something here? >> >>>> >> >>>>> On Sun, Feb 4, 2018 at 11:21 AM, wrote: >> >>>>> What are you looking for ha? Etc. Tripleo is the way to go for that >> packstack if you want simple deployment but no ha of course. >> >>>>> >> >>>>>> Il giorno 04 feb 2018, alle ore 07:53, Satish Patel < >> satish.txt at gmail.com> ha scritto: >> >>>>>> >> >>>>>> Hi Marcin, >> >>>>>> >> >>>>>> Thank you, i will try other link, also i am using CentOS7 but >> anyway >> >>>>>> now question is does openstack-ansible ready for production >> deployment >> >>>>>> despite galera issues and bug? >> >>>>>> >> >>>>>> If i want to go on production should i wait or find other tools to >> >>>>>> deploy on production? >> >>>>>> >> >>>>>>> On Sun, Feb 4, 2018 at 5:29 AM, Marcin Dulak < >> marcin.dulak at gmail.com> wrote: >> >>>>>>> When playing with openstack-ansible do it in a virtual setup >> (e.g. nested >> >>>>>>> virtualization with libvirt) so you can reproducibly bring up your >> >>>>>>> environment from scratch. >> >>>>>>> You will have to do it multiple times. >> >>>>>>> >> >>>>>>> https://developer.rackspace.com/blog/life-without-devstack- >> openstack-development-with-osa/ >> >>>>>>> is more than 2 years old. >> >>>>>>> >> >>>>>>> Try to follow >> >>>>>>> https://docs.openstack.org/openstack-ansible/latest/contribu >> tor/quickstart-aio.html >> >>>>>>> but git clone the latest state of the openstack-ansible repo. >> >>>>>>> The above page has a link that can be used to submit bugs >> directly to the >> >>>>>>> openstack-ansible project at launchpad. >> >>>>>>> In this way you may be able to cleanup/improve the documentation, >> >>>>>>> and since your setup is the simplest possible one your bug >> reports may get >> >>>>>>> noticed and reproduced by the developers. >> >>>>>>> What happens is that most people try openstack-ansible, don't >> report bugs, >> >>>>>>> or report the bugs without the information neccesary >> >>>>>>> to reproduce them, and abandon the whole idea. >> >>>>>>> >> >>>>>>> Try to search >> >>>>>>> https://bugs.launchpad.net/openstack-ansible/+bugs?field.sea >> rchtext=galera >> >>>>>>> for inspiration about what to do. >> >>>>>>> Currently the galera setup in openstack-ansible, especially on >> centos7 seems >> >>>>>>> to be undergoing some critical changes. >> >>>>>>> Enter the galera container: >> >>>>>>> lxc-attach -n aio1_galera_container-4f488f6a >> >>>>>>> look around it, check whether mysqld is running etc., try to >> identify which >> >>>>>>> ansible tasks failed and run them manually inside of the >> container. 
>> >>>>>>> >> >>>>>>> Marcin >> >>>>>>> >> >>>>>>> >> >>>>>>>> On Sun, Feb 4, 2018 at 3:41 AM, Satish Patel < >> satish.txt at gmail.com> wrote: >> >>>>>>>> >> >>>>>>>> I have noticed in output "aio1_galera_container" is failed, how >> do i >> >>>>>>>> fixed this kind of issue? >> >>>>>>>> >> >>>>>>>> >> >>>>>>>> >> >>>>>>>> PLAY RECAP >> >>>>>>>> ************************************************************ >> ************************************************************ >> ************************************************** >> >>>>>>>> aio1 : ok=41 changed=4 unreachable=0 >> >>>>>>>> failed=0 >> >>>>>>>> aio1_cinder_api_container-2af4dd01 : ok=0 changed=0 >> >>>>>>>> unreachable=0 failed=0 >> >>>>>>>> aio1_cinder_scheduler_container-454db1fb : ok=0 changed=0 >> >>>>>>>> unreachable=0 failed=0 >> >>>>>>>> aio1_designate_container-f7ea3f73 : ok=0 changed=0 >> unreachable=0 >> >>>>>>>> failed=0 >> >>>>>>>> aio1_galera_container-4f488f6a : ok=32 changed=3 >> unreachable=0 >> >>>>>>>> failed=1 >> >>>>>>>> >> >>>>>>>>> On Sat, Feb 3, 2018 at 9:26 PM, Satish Patel < >> satish.txt at gmail.com> wrote: >> >>>>>>>>> I have re-install centos7 and give it a try and got this error >> >>>>>>>>> >> >>>>>>>>> DEBUG MESSAGE RECAP >> >>>>>>>>> ************************************************************ >> >>>>>>>>> DEBUG: [Load local packages] >> >>>>>>>>> *************************************************** >> >>>>>>>>> All items completed >> >>>>>>>>> >> >>>>>>>>> Saturday 03 February 2018 21:04:07 -0500 (0:00:04.175) >> >>>>>>>>> 0:16:17.204 ***** >> >>>>>>>>> >> >>>>>>>>> ============================================================ >> =================== >> >>>>>>>>> repo_build : Create OpenStack-Ansible requirement wheels >> -------------- >> >>>>>>>>> 268.16s >> >>>>>>>>> repo_build : Wait for the venvs builds to complete >> -------------------- >> >>>>>>>>> 110.30s >> >>>>>>>>> repo_build : Install packages ------------------------------ >> ------------ >> >>>>>>>>> 68.26s >> >>>>>>>>> repo_build : Clone git repositories asynchronously >> --------------------- >> >>>>>>>>> 59.85s >> >>>>>>>>> pip_install : Install distro packages >> ---------------------------------- >> >>>>>>>>> 36.72s >> >>>>>>>>> galera_client : Install galera distro packages >> ------------------------- >> >>>>>>>>> 33.21s >> >>>>>>>>> haproxy_server : Create haproxy service config files >> ------------------- >> >>>>>>>>> 30.81s >> >>>>>>>>> repo_build : Execute the venv build scripts asynchonously >> -------------- >> >>>>>>>>> 29.69s >> >>>>>>>>> pip_install : Install distro packages >> ---------------------------------- >> >>>>>>>>> 23.56s >> >>>>>>>>> repo_server : Install repo server packages >> ----------------------------- >> >>>>>>>>> 20.11s >> >>>>>>>>> memcached_server : Install distro packages >> ----------------------------- >> >>>>>>>>> 16.35s >> >>>>>>>>> repo_build : Create venv build options files >> --------------------------- >> >>>>>>>>> 14.57s >> >>>>>>>>> haproxy_server : Install HAProxy Packages >> >>>>>>>>> ------------------------------- 8.35s >> >>>>>>>>> rsyslog_client : Install rsyslog packages >> >>>>>>>>> ------------------------------- 8.33s >> >>>>>>>>> rsyslog_client : Install rsyslog packages >> >>>>>>>>> ------------------------------- 7.64s >> >>>>>>>>> rsyslog_client : Install rsyslog packages >> >>>>>>>>> ------------------------------- 7.42s >> >>>>>>>>> repo_build : Wait for git clones to complete >> >>>>>>>>> ---------------------------- 7.25s >> >>>>>>>>> repo_server : Install repo caching 
server packages >> >>>>>>>>> ---------------------- 4.76s >> >>>>>>>>> galera_server : Check that WSREP is ready >> >>>>>>>>> ------------------------------- 4.18s >> >>>>>>>>> repo_server : Git service data folder setup >> >>>>>>>>> ----------------------------- 4.04s >> >>>>>>>>> ++ exit_fail 341 0 >> >>>>>>>>> ++ set +x >> >>>>>>>>> ++ info_block 'Error Info - 341' 0 >> >>>>>>>>> ++ echo >> >>>>>>>>> ------------------------------------------------------------ >> ---------- >> >>>>>>>>> ------------------------------------------------------------ >> ---------- >> >>>>>>>>> ++ print_info 'Error Info - 341' 0 >> >>>>>>>>> ++ PROC_NAME='- [ Error Info - 341 0 ] -' >> >>>>>>>>> ++ printf '\n%s%s\n' '- [ Error Info - 341 0 ] -' >> >>>>>>>>> -------------------------------------------- >> >>>>>>>>> >> >>>>>>>>> - [ Error Info - 341 0 ] ------------------------------ >> --------------- >> >>>>>>>>> ++ echo >> >>>>>>>>> ------------------------------------------------------------ >> ---------- >> >>>>>>>>> ------------------------------------------------------------ >> ---------- >> >>>>>>>>> ++ exit_state 1 >> >>>>>>>>> ++ set +x >> >>>>>>>>> ------------------------------------------------------------ >> ---------- >> >>>>>>>>> >> >>>>>>>>> - [ Run Time = 2030 seconds || 33 minutes ] >> -------------------------- >> >>>>>>>>> ------------------------------------------------------------ >> ---------- >> >>>>>>>>> ------------------------------------------------------------ >> ---------- >> >>>>>>>>> >> >>>>>>>>> - [ Status: Failure ] ------------------------------ >> ------------------ >> >>>>>>>>> ------------------------------------------------------------ >> ---------- >> >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>> I don't know why it failed >> >>>>>>>>> >> >>>>>>>>> but i tried following: >> >>>>>>>>> >> >>>>>>>>> [root at aio ~]# lxc-ls -f >> >>>>>>>>> NAME STATE AUTOSTART >> GROUPS >> >>>>>>>>> IPV4 IPV6 >> >>>>>>>>> aio1_cinder_api_container-2af4dd01 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.62, 172.29.238.210, 172.29.244.152 - >> >>>>>>>>> aio1_cinder_scheduler_container-454db1fb RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.117, 172.29.239.172 - >> >>>>>>>>> aio1_designate_container-f7ea3f73 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.235, 172.29.239.166 - >> >>>>>>>>> aio1_galera_container-4f488f6a RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.193, 172.29.236.69 - >> >>>>>>>>> aio1_glance_container-f8caa9e6 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.225, 172.29.239.52, 172.29.246.25 - >> >>>>>>>>> aio1_heat_api_container-8321a763 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.104, 172.29.236.186 - >> >>>>>>>>> aio1_heat_apis_container-3f70ad74 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.166, 172.29.239.13 - >> >>>>>>>>> aio1_heat_engine_container-a18e5a0a RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.118, 172.29.238.7 - >> >>>>>>>>> aio1_horizon_container-e493275c RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.98, 172.29.237.43 - >> >>>>>>>>> aio1_keystone_container-c0e23e14 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.60, 172.29.237.165 - >> >>>>>>>>> aio1_memcached_container-ef8fed4c RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.214, 172.29.238.211 - >> >>>>>>>>> aio1_neutron_agents_container-131e996e RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.153, 172.29.237.246, 172.29.243.227 - >> >>>>>>>>> aio1_neutron_server_container-ccd69394 RUNNING 1 >> 
onboot, >> >>>>>>>>> openstack 10.255.255.27, 172.29.236.129 - >> >>>>>>>>> aio1_nova_api_container-73274024 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.42, 172.29.238.201 - >> >>>>>>>>> aio1_nova_api_metadata_container-a1d32282 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.218, 172.29.238.153 - >> >>>>>>>>> aio1_nova_api_os_compute_container-52725940 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.109, 172.29.236.126 - >> >>>>>>>>> aio1_nova_api_placement_container-058e8031 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.29, 172.29.236.157 - >> >>>>>>>>> aio1_nova_conductor_container-9b6b208c RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.18, 172.29.239.9 - >> >>>>>>>>> aio1_nova_console_container-0fb8995c RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.47, 172.29.237.129 - >> >>>>>>>>> aio1_nova_scheduler_container-8f7a657a RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.195, 172.29.238.113 - >> >>>>>>>>> aio1_rabbit_mq_container-c3450d66 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.111, 172.29.237.202 - >> >>>>>>>>> aio1_repo_container-8e07fdef RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.141, 172.29.239.79 - >> >>>>>>>>> aio1_rsyslog_container-b198fbe5 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.13, 172.29.236.195 - >> >>>>>>>>> aio1_swift_proxy_container-1a3536e1 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.108, 172.29.237.31, 172.29.244.248 - >> >>>>>>>>> aio1_utility_container-bd106f11 RUNNING 1 >> onboot, >> >>>>>>>>> openstack 10.255.255.54, 172.29.239.124 - >> >>>>>>>>> [root at aio ~]# lxc-a >> >>>>>>>>> lxc-attach lxc-autostart >> >>>>>>>>> [root at aio ~]# lxc-attach -n aio1_utility_container-bd106f11 >> >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# >> >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# source /root/openrc >> >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# openstack >> >>>>>>>>> openstack openstack-host-hostfile- >> setup.sh >> >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# openstack >> >>>>>>>>> openstack openstack-host-hostfile- >> setup.sh >> >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# openstack user list >> >>>>>>>>> Failed to discover available identity versions when contacting >> >>>>>>>>> http://172.29.236.100:5000/v3. Attempting to parse version >> from URL. >> >>>>>>>>> Service Unavailable (HTTP 503) >> >>>>>>>>> [root at aio1-utility-container-bd106f11 ~]# >> >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>> not sure what is this error ? >> >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>> On Sat, Feb 3, 2018 at 7:29 PM, Satish Patel < >> satish.txt at gmail.com> >> >>>>>>>>> wrote: >> >>>>>>>>>> I have tired everything but didn't able to find solution :( >> what i am >> >>>>>>>>>> doing wrong here, i am following this instruction and please >> let me >> >>>>>>>>>> know if i am wrong >> >>>>>>>>>> >> >>>>>>>>>> >> >>>>>>>>>> https://developer.rackspace.com/blog/life-without-devstack- >> openstack-development-with-osa/ >> >>>>>>>>>> >> >>>>>>>>>> I have CentOS7, with 8 CPU and 16GB memory with 100GB disk >> size. 
>> >>>>>>>>>> >> >>>>>>>>>> Error: http://paste.openstack.org/show/660497/ >> >>>>>>>>>> >> >>>>>>>>>> >> >>>>>>>>>> I have tired gate-check-commit.sh but same error :( >> >>>>>>>>>> >> >>>>>>>>>> >> >>>>>>>>>> >> >>>>>>>>>> On Sat, Feb 3, 2018 at 1:11 AM, Satish Patel < >> satish.txt at gmail.com> >> >>>>>>>>>> wrote: >> >>>>>>>>>>> I have started playing with openstack-ansible on CentOS7 and >> trying to >> >>>>>>>>>>> install All-in-one but got this error and not sure what cause >> that >> >>>>>>>>>>> error how do i troubleshoot it? >> >>>>>>>>>>> >> >>>>>>>>>>> >> >>>>>>>>>>> TASK [bootstrap-host : Remove an existing private/public ssh >> keys if >> >>>>>>>>>>> one is missing] >> >>>>>>>>>>> >> >>>>>>>>>>> ************************************************************ >> ************ >> >>>>>>>>>>> skipping: [localhost] => (item=id_rsa) >> >>>>>>>>>>> skipping: [localhost] => (item=id_rsa.pub) >> >>>>>>>>>>> >> >>>>>>>>>>> TASK [bootstrap-host : Create ssh key pair for root] >> >>>>>>>>>>> >> >>>>>>>>>>> ************************************************************ >> ******************************************** >> >>>>>>>>>>> ok: [localhost] >> >>>>>>>>>>> >> >>>>>>>>>>> TASK [bootstrap-host : Fetch the generated public ssh key] >> >>>>>>>>>>> >> >>>>>>>>>>> ************************************************************ >> ************************************** >> >>>>>>>>>>> changed: [localhost] >> >>>>>>>>>>> >> >>>>>>>>>>> TASK [bootstrap-host : Ensure root's new public ssh key is in >> >>>>>>>>>>> authorized_keys] >> >>>>>>>>>>> >> >>>>>>>>>>> ************************************************************ >> ****************** >> >>>>>>>>>>> ok: [localhost] >> >>>>>>>>>>> >> >>>>>>>>>>> TASK [bootstrap-host : Create the required deployment >> directories] >> >>>>>>>>>>> >> >>>>>>>>>>> ************************************************************ >> ****************************** >> >>>>>>>>>>> changed: [localhost] => (item=/etc/openstack_deploy) >> >>>>>>>>>>> changed: [localhost] => (item=/etc/openstack_deploy/conf.d) >> >>>>>>>>>>> changed: [localhost] => (item=/etc/openstack_deploy/env.d) >> >>>>>>>>>>> >> >>>>>>>>>>> TASK [bootstrap-host : Deploy user conf.d configuration] >> >>>>>>>>>>> >> >>>>>>>>>>> ************************************************************ >> **************************************** >> >>>>>>>>>>> fatal: [localhost]: FAILED! 
=> {"msg": "{{ >> >>>>>>>>>>> confd_overrides[bootstrap_host_scenario] }}: 'dict object' >> has no >> >>>>>>>>>>> attribute u'aio'"} >> >>>>>>>>>>> >> >>>>>>>>>>> RUNNING HANDLER [sshd : Reload the SSH service] >> >>>>>>>>>>> >> >>>>>>>>>>> ************************************************************ >> ************************************************* >> >>>>>>>>>>> to retry, use: --limit >> >>>>>>>>>>> @/opt/openstack-ansible/tests/bootstrap-aio.retry >> >>>>>>>>>>> >> >>>>>>>>>>> PLAY RECAP >> >>>>>>>>>>> ************************************************************ >> ************************************************************ >> ************************** >> >>>>>>>>>>> localhost : ok=61 changed=36 >> unreachable=0 >> >>>>>>>>>>> failed=2 >> >>>>>>>>>>> >> >>>>>>>>>>> [root at aio openstack-ansible]# >> >>>>>>>> >> >>>>>>>> _______________________________________________ >> >>>>>>>> Mailing list: >> >>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >>>>>>>> Post to : openstack at lists.openstack.org >> >>>>>>>> Unsubscribe : >> >>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >>>>>>> >> >>>>>>> >> >>>>>> >> >>>>>> _______________________________________________ >> >>>>>> Mailing list: http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> >>>>>> Post to : openstack at lists.openstack.org >> >>>>>> Unsubscribe : http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giorgis at acmac.uoc.gr Mon Feb 5 20:24:17 2018 From: giorgis at acmac.uoc.gr (Georgios Dimitrakakis) Date: Mon, 05 Feb 2018 22:24:17 +0200 Subject: [Openstack] Ocata Created Ports Strange Issue Message-ID: <94d7c8d202eeb6516006036aba889faa@acmac.uoc.gr> Dear all, I have a small Ocata installation (1x controller + 2x compute nodes) on which I have manually created 5 network ports and afterwards each one of these ports is assigned to a specific instance (4Linux VMs and 1Windows). All these instances are located on one physical hypervisor (compute node) while the controller is also the networking node. The other day we had to do system maintenance and all hosts (compute and network/controller) were powered off but before that we gracefully shutoff all running VMs. As soon as maintenance finished we powered on everything and I met the following strange issue... Instances with an attached port were trying for very long time to get an IP from the DHCP server but they all manage to get one eventually with the exception of the Windows VM on which I had to assign it statically. Restarting networking services on controller/network and/or compute node didn't make any difference. On the other hand all newly spawned instances didn't have any problem no matter on which compute node they were spawned and their only difference was that they were automatically getting ports assigned. All the above happened on Friday and today (Monday) people were complaining that the Linux VMs didn't have network connectivity (Windows was working...), so I don't know the exact time the issue occured. I have tried to access all VMs using the "self-service" network by spawning a new instance unfortunately without success. The instance was successfully spawned, it had network connectivity but couldn't reach any of the afforementioned VMs. What I did finally and solved the problem was to detach interfaces, deleted ports, re-created new ports with same IP address etc. 
and re-attached them to the VMs. As soon as I did that networking connectivity was back to normal without even having to restart the VMs. Unfortunately I couldn't find any helpful information regarding this in the logs and I am wondering has anyone seen or experienced something similar? Best regards, G. From martialmichel at datamachines.io Tue Feb 6 01:01:50 2018 From: martialmichel at datamachines.io (Martial Michel) Date: Mon, 5 Feb 2018 20:01:50 -0500 Subject: [Openstack] [Scientific] IRC meeting today Message-ID: Hello All - We will have our IRC meeting in the #openstack-meeting channel at 2100 UTC today. All are welcome. Agenda is at: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_February_6th_2018 See you there -- Martial -------------- next part -------------- An HTML attachment was scrubbed... URL: From martialmichel at datamachines.io Tue Feb 6 01:13:06 2018 From: martialmichel at datamachines.io (Martial Michel) Date: Mon, 5 Feb 2018 20:13:06 -0500 Subject: [Openstack] [Scientific] IRC meeting today: Call for participation re: Meltdown/Spectre performance impacts: does anyone have information to share Message-ID: Good Day, As part of the agenda tomorrow, we intend to invite our community members to share information (if any) about the benchmarked effects of Meltdown/Spectre on performance under OpenStack. If you have information and are able to share, we would appreciate you sharing it with us, either during our meeting or using this email thread and we will share it during the meeting. Thank you for your help -- Martial On Mon, Feb 5, 2018 at 8:01 PM, Martial Michel < martialmichel at datamachines.io> wrote: > Hello All - > > We will have our IRC meeting in the #openstack-meeting channel at 2100 UTC > today. All are welcome. > > Agenda is at: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_ > Meeting_February_6th_2018 > > See you there -- Martial > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mahati.chamarthy at gmail.com Tue Feb 6 10:03:42 2018 From: mahati.chamarthy at gmail.com (Mahati C) Date: Tue, 6 Feb 2018 15:33:42 +0530 Subject: [Openstack] Call for mentors and funding - Outreachy May to Aug 2018 internships Message-ID: Hello everyone, We have an update on the Outreachy program, including a request for volunteer mentors and funding. For those of you who are not aware, Outreachy helps people from underrepresented groups get involved in free and open source software by matching interns with established mentors in the upstream community. For more info, please visit: https://wiki.openstack.org/wiki/Outreachy OpenStack is participating in the Outreachy May 2018 to August 2018 internships. The application period opens on February 12th. As the OpenStack PTG is around the corner, I understand many of you might be busy preparing for that. But putting in your project idea as soon as possible will help prospective interns to start working on their application. Plus, it's now a requirement to have at least one project idea submitted on the Outreachy website for OpenStack to show up under the current internship round. Interested mentors - please publish your project ideas on this page https://www.outreachy.org/communities/cfp/openstack/submit-project/. Here is a link that helps you get acquainted with mentorship process: https://wiki.openstack.org/wiki/Outreachy/Mentors We are also looking for additional sponsors to help support the increase in OpenStack applicants. 
The sponsorship cost is 6,500 USD per intern, which is used to provide them a stipend for the three-month program. You can learn more about sponsorship here: https://www.outreachy.org/sponsor/ . Outreachy has been one of the most important and effective diversity efforts we’ve invested in. We have had many interns turn into long term OpenStack contributors. Please help spread the word. If you are interested in becoming a mentor or sponsoring an intern, please contact me (mahati.chamarthy AT intel.com) or Victoria (victoria AT redhat.com). Thank you! Best, Mahati -------------- next part -------------- An HTML attachment was scrubbed... URL: From huanglingyan2 at huawei.com Wed Feb 7 01:45:24 2018 From: huanglingyan2 at huawei.com (huanglingyan (A)) Date: Wed, 7 Feb 2018 01:45:24 +0000 Subject: [Openstack] =?gb2312?b?ob5vc3Byb2ZpbGVyob90cmFjZSB3aXRoIFVVSUQg?= =?gb2312?b?bm90IGZvdW5k?= Message-ID: <4FBE811EBF224C46A4D67DE05FBC6AD301309ED2@dggeml510-mbx.china.huawei.com> There's always an error "Trace with UUID *** not found. Please check the HMAC key used int the command." when I use osprofiler with any openstack command. I install osprofiler in devstack with the following commands in local.conf. enable_plugin ceilometer http://git.openstack.org/openstack/ceilometer.git stable/pike enable_plugin osprofiler http://git.openstack.org/openstack/osprofiler stable/pike enable_plugin gnocchi https://github.com/gnocchixyz/gnocchi.git master enable_service gnocchi-api gnocchi-metricd Then check the /etc/*.conf like nova, ceilometer and many modules. Following several sentences are existed. [profiler] connection_string = messaging:// hmac_keys = SECRET_KEY trace_sqlalchemy = True enabled = True However, osprofiler cannot run properly. For example, stack at ubuntu:~/osprofiler/osprofiler/drivers$ openstack --os-profile hmac-key=SECTRE_KEY server list +--------------------------------------+---------------+--------+-------------------------------------------------------+--------------------------+---------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+---------------+--------+-------------------------------------------------------+--------------------------+---------+ | 5d6beaf0-f3cc-485a-b610-c81c730dfc46 | cirros_server | ACTIVE | private=10.0.0.7, fd1e:b842:338:0:f816:3eff:fe39:b43c | cirros-0.3.5-x86_64-disk | m1.tiny | | 0102b739-1fa8-492b-83a8-cb915e6e9652 | ins_1 | ACTIVE | public=172.24.4.10, 2001:db8::5 | cirros-0.3.5-x86_64-disk | m1.tiny | +--------------------------------------+---------------+--------+-------------------------------------------------------+--------------------------+---------+ Trace ID: c7d15b08-31d8-4b59-930a-0b0ecd1e95edDisplay trace with command: osprofiler trace show --html c7d15b08-31d8-4b59-930a-0b0ecd1e95ed stack at ubuntu:~/osprofiler/osprofiler/drivers$ osprofiler trace show --html c7d15b08-31d8-4b59-930a-0b0ecd1e95ed Trace with UUID c7d15b08-31d8-4b59-930a-0b0ecd1e95ed not found. Please check the HMAC key used in the command. The versions of devstack, openstack, osprofiler are all pike. OS is ubuntu 16.04. I have tried osprofiler 1.11.0 and 1.14.0 but they have the same problem. Looking at the configurations in /etc/ceilometer/pipeline.yaml and event_pipeline.yaml, the publishers is gnocchi://. When I change it as a file publishers, I find that messages in pipeline were published periodically while no events in event_pipeline were published. 
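For reference, the file publisher I mean is a one-line change in the sink section of event_pipeline.yaml; the sink name and output path below are just the ones I picked for testing, not defaults:

sinks:
    - name: event_sink
      publishers:
          - file:///var/log/ceilometer/events.txt

pipeline.yaml gets the same kind of change for the meter sink.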
Trace the instruction “osprofiler trace **”, the following request is sent but received NULL. "GET /v2/events?q.field=base_id&q.op=eq&q.type=&q.value=307b7b6e-78c5-4df3-8b82-190ae50e59c1&limit=100000 HTTP/1.1" 200 226 "-" ceilometer client.apiclient" My local.conf is [[local|localrc]] ADMIN_PASSWORD=secret DATABASE_PASSWORD=$ADMIN_PASSWORD RABBIT_PASSWORD=$ADMIN_PASSWORD SERVICE_PASSWORD=$ADMIN_PASSWORD SERVICE_TOKEN=super-secret-admin-token HOST_IP=10.67.247.40 LOGFILE=$HOME/logs/stack.sh.log LOGDIR=$HOME/logs VERBOSE=true LOG_COLOR=true # This enables Neutron, because that's how I roll. disable_service n-net enable_service q-svc enable_service q-agt enable_service q-dhcp enable_service q-l3 enable_service q-meta disable_service tempest enable_plugin ceilometer http://git.openstack.org/openstack/ceilometer.git stable/pike enable_plugin osprofiler http://git.openstack.org/openstack/osprofiler stable/pike enable_plugin gnocchi https://github.com/gnocchixyz/gnocchi.git master enable_service gnocchi-api gnocchi-metricd /etc/ceilometer/ceilometer.conf is like [DEFAULT] debug = True transport_url = rabbit://stackrabbit:secret at 10.67.247.40:5672/ [oslo_messaging_notifications] topics = notifications,profiler [coordination] backend_url = redis://localhost:6379 [notification] workers = 5 workload_partitioning = True [cache] backend_argument = url:redis://localhost:6379 backend_argument = distributed_lock:True backend_argument = db:0 backend_argument = redis_expiration_time:600 backend = dogpile.cache.redis enabled = True [service_credentials] auth_url = http://10.67.247.40/identity region_name = RegionOne password = secret username = ceilometer project_name = service project_domain_id = default user_domain_id = default auth_type = password [keystone_authtoken] memcached_servers = 10.67.247.40:11211 signing_dir = /var/cache/ceilometer cafile = /opt/stack/data/ca-bundle.pem project_domain_name = Default project_name = service user_domain_name = Default password = secret username = ceilometer auth_url = http://10.67.247.40/identity auth_type = password [event] store_raw = info Something must go wrong. Can anybody help me ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ambadiaravind at gmail.com Wed Feb 7 09:58:37 2018 From: ambadiaravind at gmail.com (aRaviNd) Date: Wed, 7 Feb 2018 15:28:37 +0530 Subject: [Openstack] Not able to upload files to openstack swift Message-ID: Hi All, We have created an openstack cluster with one proxy server and three storage nodes. Configuration consist of two regions and three zones. [image: enter image description here] We are able to create containers [image: enter image description here] But while trying to upload files we are getting 503 service unavailable and seeing below logs in swift.log [image: enter image description here] - Aravind -------------- next part -------------- An HTML attachment was scrubbed... 
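A quick way to get more detail on a 503 like this is to re-run the upload with client debug output and then pull every log line carrying the transaction id the proxy returns (log locations below are typical defaults and may differ per install):

swift --debug upload <container> <file>
grep -r '<transaction-id>' /var/log/swift/ /var/log/syslog

Comparing the proxy, object-server and container-server entries for the same transaction id usually shows which backend request actually failed.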
URL: From raffaele.montella at uniparthenope.it Wed Feb 7 15:00:40 2018 From: raffaele.montella at uniparthenope.it (Raffaele Montella) Date: Wed, 7 Feb 2018 16:00:40 +0100 Subject: [Openstack] Mobile application offloading for both CPU and GPU on OpenStack In-Reply-To: <4FBE811EBF224C46A4D67DE05FBC6AD301309ED2@dggeml510-mbx.china.huawei.com> References: <4FBE811EBF224C46A4D67DE05FBC6AD301309ED2@dggeml510-mbx.china.huawei.com> Message-ID: Dears, I hope you could find interesting the results delivered by the H2020 RAPID Project https://github.com/RapidProjectH2020 http://rapid-project.eu In very short, it enables Android applications to be offloaded to the cloud (we use openStack) in order to save battery and earn computing power. It works with both CPU and GPGPU code enabling CUDA for any regular Android device. Cheers, Raffaele -- ----------------------- Raffaele Montella, PhD Assistant Professor in Computer Science Department of Science and Technology University of Napoli Parthenope CDN Isola C4 - 80143 - Napoli - Italy Tel: +39 081 5476672 Mob: +39 339 3055922 -------------- next part -------------- An HTML attachment was scrubbed... URL: From allison at openstack.org Wed Feb 7 20:33:33 2018 From: allison at openstack.org (Allison Price) Date: Wed, 7 Feb 2018 14:33:33 -0600 Subject: [Openstack] Mobile application offloading for both CPU and GPU on OpenStack In-Reply-To: References: <4FBE811EBF224C46A4D67DE05FBC6AD301309ED2@dggeml510-mbx.china.huawei.com> Message-ID: <5F23BA44-006A-4D28-BCBF-A1DAF7DAF9AC@openstack.org> Hi Raffaele, Thank you for sharing this use case. I don’t know if you have seen, but we have changed the way that we are organizing content for the upcoming Vancouver Summit , and I think that this use case would be a great fit for either the Edge Computing or the HPC/GPU/AI track. The deadline to submit is tomorrow, February 8 at 11:59pm PST. Please let me know if you have any questions around the content structure or the deadline, and I would be more than happy to help. Cheers, Allison Allison Price OpenStack Foundation allison at openstack.org > On Feb 7, 2018, at 2:21 PM, Mark Collier wrote: > > From: "Raffaele Montella" > > Date: Feb 7, 2018 9:07 AM > Subject: [Openstack] Mobile application offloading for both CPU and GPU on OpenStack > To: "openstack at lists.openstack.org " > > Cc: > > Dears, > I hope you could find interesting the results delivered by the H2020 RAPID Project https://github.com/RapidProjectH2020 http://rapid-project.eu > In very short, it enables Android applications to be offloaded to the cloud (we use openStack) in order to save battery and earn computing power. > It works with both CPU and GPGPU code enabling CUDA for any regular Android device. > Cheers, > Raffaele > -- > ----------------------- > Raffaele Montella, PhD > Assistant Professor in Computer Science > Department of Science and Technology > University of Napoli Parthenope > CDN Isola C4 - 80143 - Napoli - Italy > Tel: +39 081 5476672 > Mob: +39 339 3055922 > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From clay.gerrard at gmail.com Wed Feb 7 21:13:21 2018 From: clay.gerrard at gmail.com (Clay Gerrard) Date: Wed, 7 Feb 2018 13:13:21 -0800 Subject: [Openstack] Not able to upload files to openstack swift In-Reply-To: References: Message-ID: One replica is a little strange. Do the uploads *always* fail - in the same way? Or is this just one example of a PUT that returned 503? Are you doing a lot of concurrent PUTs to the same object/name/disk? The error from the log (EPIPE) means the object-server closed the connection as the proxy was writing to it... which is a little strange. There should be a corresponding exception/error from the object-server service - you should make sure the object-servers are running and find where they are logging - then grep all the logs for the transaction-id to get a better picture of the whole distributed transaction. If you keep digging I know you can find the problem. Let us know what you find. Good luck, -Clay On Wed, Feb 7, 2018 at 1:58 AM, aRaviNd wrote: > Hi All, > > We have created an openstack cluster with one proxy server and three > storage nodes. Configuration consist of two regions and three zones. > > [image: enter image description here] > > > We are able to create containers > > [image: enter image description here] > > > But while trying to upload files we are getting 503 service unavailable > and seeing below logs in swift.log > > [image: enter image description here] > > - Aravind > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doka.ua at gmx.com Thu Feb 8 20:53:14 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Thu, 8 Feb 2018 22:53:14 +0200 Subject: [Openstack] diskimage-builder: prepare ubuntu 17.x images Message-ID: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> Hi colleagues, does anybody here know how to prepare Ubuntu Artful (17.10) image using diskimage-builder? diskimage-builder use the following naming style for download - $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz and while "-root" names are there for trusty/amd64 and xenial/amd64 distros, these archives for artful (and bionic) are absent on cloud-images.ubuntu.com. There are just different kinds of images, not source tree as in -root archives. I will appreciate any ideas or knowledge how to customize 17.10-based image using diskimage-builder or in diskimage-builder-like fashion. Thanks! -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at medberry.net Thu Feb 8 21:32:25 2018 From: openstack at medberry.net (David Medberry) Date: Thu, 8 Feb 2018 14:32:25 -0700 Subject: [Openstack] [Openstack-operators] diskimage-builder: prepare ubuntu 17.x images In-Reply-To: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> References: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> Message-ID: Subscribe to this bug and click the "This affects me." link near the top. https://bugs.launchpad.net/cloud-images/+bug/1585233 On Thu, Feb 8, 2018 at 1:53 PM, Volodymyr Litovka wrote: > Hi colleagues, > > does anybody here know how to prepare Ubuntu Artful (17.10) image using > diskimage-builder? 
> > diskimage-builder use the following naming style for download - > $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz > > and while "-root" names are there for trusty/amd64 and xenial/amd64 > distros, these archives for artful (and bionic) are absent on > cloud-images.ubuntu.com. There are just different kinds of images, not > source tree as in -root archives. > > I will appreciate any ideas or knowledge how to customize 17.10-based > image using diskimage-builder or in diskimage-builder-like fashion. > > Thanks! > > -- > Volodymyr Litovka > "Vision without Execution is Hallucination." -- Thomas Edison > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Fri Feb 9 04:01:47 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 9 Feb 2018 15:01:47 +1100 Subject: [Openstack] diskimage-builder: prepare ubuntu 17.x images In-Reply-To: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> References: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> Message-ID: <20180209040147.GR23143@thor.bakeyournoodle.com> On Thu, Feb 08, 2018 at 10:53:14PM +0200, Volodymyr Litovka wrote: > Hi colleagues, > > does anybody here know how to prepare Ubuntu Artful (17.10) image using > diskimage-builder? > > diskimage-builder use the following naming style for download - > $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz > > and while "-root" names are there for trusty/amd64 and xenial/amd64 distros, > these archives for artful (and bionic) are absent on > cloud-images.ubuntu.com. There are just different kinds of images, not > source tree as in -root archives. > > I will appreciate any ideas or knowledge how to customize 17.10-based image > using diskimage-builder or in diskimage-builder-like fashion. You might like to investigate the ubuntu-minimal DIB element which will build your ubuntu image from apt rather than starting with the pre-built image. In the meantime I'll look at how we can consume the .img file, which is similar to what we'd need to do for Fedora Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From doka.ua at gmx.com Fri Feb 9 11:03:40 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Fri, 9 Feb 2018 13:03:40 +0200 Subject: [Openstack] diskimage-builder: prepare ubuntu 17.x images In-Reply-To: <20180209040147.GR23143@thor.bakeyournoodle.com> References: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> <20180209040147.GR23143@thor.bakeyournoodle.com> Message-ID: <687756ea-1989-418d-73d7-614a501078d7@gmx.com> Hi Tony, On 2/9/18 6:01 AM, Tony Breeds wrote: > On Thu, Feb 08, 2018 at 10:53:14PM +0200, Volodymyr Litovka wrote: >> Hi colleagues, >> >> does anybody here know how to prepare Ubuntu Artful (17.10) image using >> diskimage-builder? >> >> diskimage-builder use the following naming style for download - >> $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz >> >> and while "-root" names are there for trusty/amd64 and xenial/amd64 distros, >> these archives for artful (and bionic) are absent on >> cloud-images.ubuntu.com. There are just different kinds of images, not >> source tree as in -root archives. 
>> >> I will appreciate any ideas or knowledge how to customize 17.10-based image >> using diskimage-builder or in diskimage-builder-like fashion. > You might like to investigate the ubuntu-minimal DIB element which will > build your ubuntu image from apt rather than starting with the pre-built > image. good idea, but with export DIST="ubuntu-minimal" export DIB_RELEASE=artful diskimage-builder fails with the following debug: 2018-02-09 10:33:22.426 | dib-run-parts Sourcing environment file /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash 2018-02-09 10:33:22.427 | + source /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash 2018-02-09 10:33:22.427 | ++++ dirname /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash 2018-02-09 10:33:22.428 | +++ PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/pre-install.d/../environment.d/..' 2018-02-09 10:33:22.428 | +++ dib-init-system 2018-02-09 10:33:22.429 | + set -eu 2018-02-09 10:33:22.429 | + set -o pipefail 2018-02-09 10:33:22.429 | + '[' -f /usr/bin/systemctl -o -f /bin/systemctl ']' 2018-02-09 10:33:22.429 | + [[ -f /sbin/initctl ]] 2018-02-09 10:33:22.429 | + [[ -f /etc/gentoo-release ]] 2018-02-09 10:33:22.429 | + [[ -f /sbin/init ]] 2018-02-09 10:33:22.429 | + echo 'Unknown init system' 2018-02-09 10:36:54.852 | + exit 1 2018-02-09 10:36:54.853 | ++ DIB_INIT_SYSTEM='Unknown init system while earlier it find systemd 2018-02-09 10:33:22.221 | dib-run-parts Sourcing environment file /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash 2018-02-09 10:33:22.223 | + source /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash 2018-02-09 10:33:22.223 | ++++ dirname /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash 2018-02-09 10:33:22.224 | +++ PATH=/home/doka/DIB/dib/bin:/home/doka/DIB/dib/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/.. 2018-02-09 10:33:22.224 | +++ dib-init-system 2018-02-09 10:33:22.225 | + set -eu 2018-02-09 10:33:22.225 | + set -o pipefail 2018-02-09 10:33:22.225 | + '[' -f /usr/bin/systemctl -o -f /bin/systemctl ']' 2018-02-09 10:33:22.225 | + echo systemd 2018-02-09 10:33:22.226 | ++ DIB_INIT_SYSTEM=systemd 2018-02-09 10:33:22.226 | ++ export DIB_INIT_SYSTEM it seems somewhere in the middle something happens to systemd package > In the meantime I'll look at how we can consume the .img file, which is > similar to what we'd need to do for Fedora script diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball contains the function get_ubuntu_tarball() which, after all checks, does the following: sudo tar -C $TARGET_ROOT --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE probably, the easiest hack around the issue is to change above to smth like sudo ( mount -o loop tar cv  | tar xv -C $TARGET_ROOT ... ) Will try this. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doka.ua at gmx.com Fri Feb 9 13:48:53 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Fri, 9 Feb 2018 15:48:53 +0200 Subject: [Openstack] diskimage-builder: prepare ubuntu 17.x images In-Reply-To: <687756ea-1989-418d-73d7-614a501078d7@gmx.com> References: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> <20180209040147.GR23143@thor.bakeyournoodle.com> <687756ea-1989-418d-73d7-614a501078d7@gmx.com> Message-ID: Hi Tony, this patch works for me: --- diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball.orig 2018-02-09 12:20:02.117793234 +0000 +++ diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball 2018-02-09 13:25:48.654868263 +0000 @@ -14,7 +14,9 @@  DIB_CLOUD_IMAGES=${DIB_CLOUD_IMAGES:-http://cloud-images.ubuntu.com}  DIB_RELEASE=${DIB_RELEASE:-trusty} -BASE_IMAGE_FILE=${BASE_IMAGE_FILE:-$DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz} +SUFFIX="-root" +[[ $DIB_RELEASE =~ (artful|bionic) ]] && SUFFIX="" +BASE_IMAGE_FILE=${BASE_IMAGE_FILE:-$DIB_RELEASE-server-cloudimg-${ARCH}${SUFFIX}.tar.gz}  SHA256SUMS=${SHA256SUMS:-https://${DIB_CLOUD_IMAGES##http?(s)://}/$DIB_RELEASE/current/SHA256SUMS}  CACHED_FILE=$DIB_IMAGE_CACHE/$BASE_IMAGE_FILE  CACHED_FILE_LOCK=$DIB_LOCKFILES/$BASE_IMAGE_FILE.lock @@ -45,9 +47,25 @@          fi          popd      fi -    # Extract the base image (use --numeric-owner to avoid UID/GID mismatch between -    # image tarball and host OS e.g. when building Ubuntu image on an openSUSE host) -    sudo tar -C $TARGET_ROOT --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE +    if [ -n "$SUFFIX" ]; then +      # Extract the base image (use --numeric-owner to avoid UID/GID mismatch between +      # image tarball and host OS e.g. when building Ubuntu image on an openSUSE host) +      sudo tar -C $TARGET_ROOT --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE +    else +      # Unpack image to IDIR, mount it on MDIR, copy it to TARGET_ROOT +      IDIR="/tmp/`head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''`" +      MDIR="/tmp/`head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''`" +      sudo mkdir $IDIR $MDIR +      sudo tar -C $IDIR --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE +      sudo mount -o loop -t auto $IDIR/$DIB_RELEASE-server-cloudimg-${ARCH}.img $MDIR +      pushd $PWD 2>/dev/null +      cd $MDIR +      sudo tar c . | sudo tar x -C $TARGET_ROOT -k --numeric-owner 2>/dev/null +      popd +      # Clean up +      sudo umount $MDIR +      sudo rm -rf $IDIR $MDIR +    fi  }  ( On 2/9/18 1:03 PM, Volodymyr Litovka wrote: > Hi Tony, > > On 2/9/18 6:01 AM, Tony Breeds wrote: >> On Thu, Feb 08, 2018 at 10:53:14PM +0200, Volodymyr Litovka wrote: >>> Hi colleagues, >>> >>> does anybody here know how to prepare Ubuntu Artful (17.10) image using >>> diskimage-builder? >>> >>> diskimage-builder use the following naming style for download - >>> $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz >>> >>> and while "-root" names are there for trusty/amd64 and xenial/amd64 distros, >>> these archives for artful (and bionic) are absent on >>> cloud-images.ubuntu.com. There are just different kinds of images, not >>> source tree as in -root archives. >>> >>> I will appreciate any ideas or knowledge how to customize 17.10-based image >>> using diskimage-builder or in diskimage-builder-like fashion. >> You might like to investigate the ubuntu-minimal DIB element which will >> build your ubuntu image from apt rather than starting with the pre-built >> image. 
> good idea, but with > > export DIST="ubuntu-minimal" > export DIB_RELEASE=artful > > diskimage-builder fails with the following debug: > > 2018-02-09 10:33:22.426 | dib-run-parts Sourcing environment file > /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash > 2018-02-09 10:33:22.427 | + source > /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash > 2018-02-09 10:33:22.427 | ++++ dirname > /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash > 2018-02-09 10:33:22.428 | +++ > PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/pre-install.d/../environment.d/..' > 2018-02-09 10:33:22.428 | +++ dib-init-system > 2018-02-09 10:33:22.429 | + set -eu > 2018-02-09 10:33:22.429 | + set -o pipefail > 2018-02-09 10:33:22.429 | + '[' -f /usr/bin/systemctl -o -f > /bin/systemctl ']' > 2018-02-09 10:33:22.429 | + [[ -f /sbin/initctl ]] > 2018-02-09 10:33:22.429 | + [[ -f /etc/gentoo-release ]] > 2018-02-09 10:33:22.429 | + [[ -f /sbin/init ]] > 2018-02-09 10:33:22.429 | + echo 'Unknown init system' > 2018-02-09 10:36:54.852 | + exit 1 > 2018-02-09 10:36:54.853 | ++ DIB_INIT_SYSTEM='Unknown init system > > while earlier it find systemd > > 2018-02-09 10:33:22.221 | dib-run-parts Sourcing environment file > /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash > 2018-02-09 10:33:22.223 | + source > /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash > 2018-02-09 10:33:22.223 | ++++ dirname > /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash > 2018-02-09 10:33:22.224 | +++ > PATH=/home/doka/DIB/dib/bin:/home/doka/DIB/dib/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/.. > 2018-02-09 10:33:22.224 | +++ dib-init-system > 2018-02-09 10:33:22.225 | + set -eu > 2018-02-09 10:33:22.225 | + set -o pipefail > 2018-02-09 10:33:22.225 | + '[' -f /usr/bin/systemctl -o -f > /bin/systemctl ']' > 2018-02-09 10:33:22.225 | + echo systemd > 2018-02-09 10:33:22.226 | ++ DIB_INIT_SYSTEM=systemd > 2018-02-09 10:33:22.226 | ++ export DIB_INIT_SYSTEM > > it seems somewhere in the middle something happens to systemd package >> In the meantime I'll look at how we can consume the .img file, which is >> similar to what we'd need to do for Fedora > script > diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball > contains the function get_ubuntu_tarball() which, after all checks, > does the following: > > sudo tar -C $TARGET_ROOT --numeric-owner -xzf > $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE > > probably, the easiest hack around the issue is to change above to smth > like > > sudo ( > mount -o loop > tar cv  | tar xv -C $TARGET_ROOT ... > ) > > Will try this. > > -- > Volodymyr Litovka > "Vision without Execution is Hallucination." -- Thomas Edison -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... URL: From guoyongxhzhf at 163.com Fri Feb 9 15:34:09 2018 From: guoyongxhzhf at 163.com (guoyongxhzhf at 163.com) Date: Fri, 9 Feb 2018 23:34:09 +0800 Subject: [Openstack] [openstack][openstackweb] can not registe new user at openstack.org Message-ID: <80DBF3CA03614C9AA35298FB939954AD@guoyongPC> it always tips "Please confirm that you are not a robots". But there is no way to confirm. 
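With the patch above applied, a minimal run to exercise the artful path looks something like this (element list and output name are illustrative, on a default amd64 build host):

export DIB_RELEASE=artful
disk-image-create -o ubuntu-artful ubuntu vm

The unpatched element fails earlier, while trying to download artful-server-cloudimg-amd64-root.tar.gz, because only the image-style tarballs are published for artful and bionic on cloud-images.ubuntu.com.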
-------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Fri Feb 9 15:48:28 2018 From: eblock at nde.ag (Eugen Block) Date: Fri, 09 Feb 2018 15:48:28 +0000 Subject: [Openstack] Ocata Created Ports Strange Issue In-Reply-To: <94d7c8d202eeb6516006036aba889faa@acmac.uoc.gr> Message-ID: <20180209154828.Horde.AD5Pnc3umP5TFhZ-HUJtOQt@webmail.nde.ag> Hi, my input on this is very limited, but I believe we had a similar issue in our Ocata cloud. My workaround was like yours, detach the assigned port, recreate it and attach it again. The only strange thing was, that when I wanted to delete the port, the CLI reported that the port didn't exist, it literally disappeared! I didn't spend very much time to debug it because it has not happened since then. And if I remember correctly, it occured around our large migration, where we upgraded our Ceph backend to the latest version, upgraded the OS of all nodes and also the cloud from Mitaka to Ocata (via Newton), it could have been a side effect of that, at least that was my hope. So as I said, this is not of big help, but I can confirm your observation, unfortunately without any pointers to the cause. If this happens again, I will definitely spend more time on debugging! ;-) Regards, Eugen Zitat von Georgios Dimitrakakis : > Dear all, > > I have a small Ocata installation (1x controller + 2x compute nodes) > on which I have manually created 5 network ports and afterwards each > one of these ports is assigned to a specific instance (4Linux VMs > and 1Windows). All these instances are located on one physical > hypervisor (compute node) while the controller is also the > networking node. > > The other day we had to do system maintenance and all hosts (compute > and network/controller) were powered off but before that we > gracefully shutoff all running VMs. > > As soon as maintenance finished we powered on everything and I met > the following strange issue... Instances with an attached port were > trying for very long time to get an IP from the DHCP server but they > all manage to get one eventually with the exception of the Windows > VM on which I had to assign it statically. Restarting networking > services on controller/network and/or compute node didn't make any > difference. On the other hand all newly spawned instances didn't > have any problem no matter on which compute node they were spawned > and their only difference was that they were automatically getting > ports assigned. All the above happened on Friday and today (Monday) > people were complaining that the Linux VMs didn't have network > connectivity (Windows was working...), so I don't know the exact > time the issue occured. I have tried to access all VMs using the > "self-service" network by spawning a new instance unfortunately > without success. The instance was successfully spawned, it had > network connectivity but couldn't reach any of the afforementioned > VMs. > > What I did finally and solved the problem was to detach interfaces, > deleted ports, re-created new ports with same IP address etc. and > re-attached them to the VMs. As soon as I did that networking > connectivity was back to normal without even having to restart the > VMs. > > Unfortunately I couldn't find any helpful information regarding this > in the logs and I am wondering has anyone seen or experienced > something similar? > > Best regards, > > G. 
> > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -- Eugen Block voice : +49-40-559 51 75 NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77 Postfach 61 03 15 D-22423 Hamburg e-mail : eblock at nde.ag Vorsitzende des Aufsichtsrates: Angelika Mozdzen Sitz und Registergericht: Hamburg, HRB 90934 Vorstand: Jens-U. Mozdzen USt-IdNr. DE 814 013 983 From jimmy at openstack.org Fri Feb 9 16:48:21 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 09 Feb 2018 10:48:21 -0600 Subject: [Openstack] [openstack][openstackweb] can not registe new user at openstack.org In-Reply-To: <0BE5A4C4-0A81-461C-9457-50C2EE46EFEA@openstack.org> References: <80DBF3CA03614C9AA35298FB939954AD@guoyongPC> <0BE5A4C4-0A81-461C-9457-50C2EE46EFEA@openstack.org> Message-ID: <5A7DD0D5.8080906@openstack.org> Hi - I'm sorry for the troubles you're having in registering as a Foundation Member. If you could email me directly (jimmy at openstack.org), I'd be happy to help you with the issue and get your membership set up. Thank you, Jimmy Begin forwarded message: *From: *> *Subject: **[Openstack] [openstack][openstackweb] can not registe new user at openstack.org * *Date: *February 9, 2018 at 9:34:09 AM CST *To: *"openstack" > it always tips "Please confirm that you are not a robots". But there is no way to confirm. _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From navdeep.uniyal at bristol.ac.uk Fri Feb 9 17:32:50 2018 From: navdeep.uniyal at bristol.ac.uk (Navdeep Uniyal) Date: Fri, 9 Feb 2018 17:32:50 +0000 Subject: [Openstack] Routing in connected VMs Message-ID: Dear all, I am trying to create a network chain manually of VMs in openstack by entering the routing table entries in the machines. However, it doesn't seems to work. The network between the VMs is VLAN network on top of provider network. Scenario is like : A-->B-->C I have enabled the IP forwarding on the node B but I am not able to ping from A to C. ICMP echo requests are not reaching C. I am able to ping from B to A and B to C. Please if someone could help in this regard. Best Regards, Navdeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.denton at rackspace.com Sun Feb 11 20:59:22 2018 From: james.denton at rackspace.com (James Denton) Date: Sun, 11 Feb 2018 20:59:22 +0000 Subject: [Openstack] Routing in connected VMs Message-ID: <73F4FBB7-D100-456C-B6B8-F29148DC185D@rackspace.com> Hi Navdeep, To get this to work, you will need to disable port security on the B device’s ports, or at a minimum, modify the allowed-address-pairs on the port to allow the traffic out towards C. Disabling port security is typically the way to go about satisfying this particular use case. James From: Navdeep Uniyal Date: Friday, February 9, 2018 at 12:40 PM To: OpenStack Mailing List Subject: [Openstack] Routing in connected VMs Dear all, I am trying to create a network chain manually of VMs in openstack by entering the routing table entries in the machines. However, it doesn’t seems to work. 
The network between the VMs is VLAN network on top of provider network. Scenario is like : A-->B-->C I have enabled the IP forwarding on the node B but I am not able to ping from A to C. ICMP echo requests are not reaching C. I am able to ping from B to A and B to C. Please if someone could help in this regard. Best Regards, Navdeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From navdeep.uniyal at bristol.ac.uk Sun Feb 11 21:37:10 2018 From: navdeep.uniyal at bristol.ac.uk (Navdeep Uniyal) Date: Sun, 11 Feb 2018 21:37:10 +0000 Subject: [Openstack] Routing in connected VMs In-Reply-To: <73F4FBB7-D100-456C-B6B8-F29148DC185D@rackspace.com> References: <73F4FBB7-D100-456C-B6B8-F29148DC185D@rackspace.com> Message-ID: Thanks James, I could figure out and resolved it by disabling port security. Regards, Navdeep ________________________________ From: James Denton Sent: Sunday, February 11, 2018 8:59:22 PM To: Navdeep Uniyal; OpenStack Mailing List Subject: Re: [Openstack] Routing in connected VMs Hi Navdeep, To get this to work, you will need to disable port security on the B device’s ports, or at a minimum, modify the allowed-address-pairs on the port to allow the traffic out towards C. Disabling port security is typically the way to go about satisfying this particular use case. James From: Navdeep Uniyal Date: Friday, February 9, 2018 at 12:40 PM To: OpenStack Mailing List Subject: [Openstack] Routing in connected VMs Dear all, I am trying to create a network chain manually of VMs in openstack by entering the routing table entries in the machines. However, it doesn’t seems to work. The network between the VMs is VLAN network on top of provider network. Scenario is like : A-->B-->C I have enabled the IP forwarding on the node B but I am not able to ping from A to C. ICMP echo requests are not reaching C. I am able to ping from B to A and B to C. Please if someone could help in this regard. Best Regards, Navdeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiefp88 at sina.com Mon Feb 12 06:24:44 2018 From: xiefp88 at sina.com (xiefp88 at sina.com) Date: Mon, 12 Feb 2018 14:24:44 +0800 Subject: [Openstack] =?gbk?b?u9i4tKO6UmU6ICBbaXJvbmljXSBob3cgdG8gcHJldmVu?= =?gbk?q?t_ironic_user_to_controle_ipmi_through_OS=3F?= Message-ID: <20180212062444.2DAB910200EA@webmail.sinamail.sina.com.cn> I am trying to make an ironic image for windows with windows-openstack-imaging-tools. But I got warnning like this: 按规定设备无效。 运行 "bcdedit /?" 获得命令行帮助。 参数错误。 警告: BCDEdit failed: bootmgr device locate 按规定设备无效。 运行 "bcdedit /?" 获得命令行帮助。 参数错误。 警告: BCDEdit failed: default device locate 按规定设备无效。 运行 "bcdedit /?" 获得命令行帮助。 参数错误。 警告: BCDEdit failed: default osdevice locate And I try to excute "F:\windows\system32\bcdedit.exe /store F:\boot\bcd /set `{bootmgr`} device locate", the error occurs. "F:\windows\system32\bcdedit.exe /store F:\boot\bcd /set `{default`} device locate" "F:\windows\system32\bcdedit.exe /store F:\boot\bcd /set `{default`} osdevice locate" meet the same error. It looks like the parameter is not supported. But the command is from WinImageBuilder.psm1 in windows-openstack-imaging-tools. Has anyone meet this warning? -------------- next part -------------- An HTML attachment was scrubbed... 
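For the A-->B-->C chain discussed in the routing thread, the Neutron-side change usually comes down to one of these two commands against B's ports (port UUIDs are placeholders, and option spelling can vary a little between openstackclient releases):

# let B forward traffic that does not match its own fixed IP/MAC
openstack port set --no-security-group --disable-port-security <uuid-of-B-port>

# or, more narrowly, whitelist the prefixes B is expected to forward
openstack port set --allowed-address ip-address=<A-or-C-subnet-cidr> <uuid-of-B-port>

Note that Neutron refuses to disable port security while security groups are still attached, which is why the first form clears them in the same call.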
URL: From xiefp88 at sina.com Mon Feb 12 07:02:24 2018 From: xiefp88 at sina.com (xiefp88 at sina.com) Date: Mon, 12 Feb 2018 15:02:24 +0800 Subject: [Openstack] bcdedit warning in windows-openstack-imaging-tools Message-ID: <20180212070224.C82A52A00D9@webmail.sinamail.sina.com.cn> I am trying to make an ironic image for windows with windows-openstack-imaging-tools. But I got warnning like this: 按规定设备无效。 运行 "bcdedit /?" 获得命令行帮助。 参数错误。 警告: BCDEdit failed: bootmgr device locate 按规定设备无效。 运行 "bcdedit /?" 获得命令行帮助。 参数错误。 警告: BCDEdit failed: default device locate 按规定设备无效。 运行 "bcdedit /?" 获得命令行帮助。 参数错误。 警告: BCDEdit failed: default osdevice locate And I try to excute "F:\windows\system32\bcdedit.exe /store F:\boot\bcd /set `{bootmgr`} device locate", the error occurs. "F:\windows\system32\bcdedit.exe /store F:\boot\bcd /set `{default`} device locate" "F:\windows\system32\bcdedit.exe /store F:\boot\bcd /set `{default`} osdevice locate" meet the same error. It looks like the parameter is not supported. But the command is from WinImageBuilder.psm1 in windows-openstack-imaging-tools. Has anyone meet this warning? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajabbar2014 at namal.edu.pk Mon Feb 12 09:57:01 2018 From: ajabbar2014 at namal.edu.pk (Abdul Malik) Date: Mon, 12 Feb 2018 14:57:01 +0500 Subject: [Openstack] Fwd: Port Binding between OpenStack nova VM and ODL network port In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: Abdul Malik Date: 12 February 2018 at 14:51 Subject: Port Binding between OpenStack nova VM and ODL network port To: openstack at lists.openstack.org Hello all, I am working on SDN controllers with OpenStack and I want to connect a VM launched by Nova to a L3VPN port created in OpenDayLight controller. I have managed to connect this port to a VM in nova but how do I tell OpenDayLight that this port is connected to a VM located on a particular host with id . Do I just need to update the attributes of that port in OpenDayLight or is there something else I need to do to configure OpenDayLight i-e attributes of a port are "port": [ { "uuid": "153f734e-396a-4201-b3ab-f16d08140504", "tenant-id": "13d103ed-9d4b-4a5c-82bd-e34c68c7c3c5", "network-id": "94c796b9-4e59-45e5-ba34-ce9d2c77bfa8", "fixed-ips": [ { "subnet-id": "f877cea0-6ff9-42cf-86f0-afb390f32017", "ip-address": "172.18.0.2" } ], "neutron-binding:vif-type": "ovs", "neutron-binding:vif-details": [ {} ], "neutron-binding:vnic-type": "normal", "device-owner": "compute:None", "name": "test_port", "admin-state-up": true, "mac-address": "fa:16:3e:b7:38:25" } ] Do I just need to add "host_id" and "device_id" in above information or something more. And if I am not getting things right guidance would be really appreciable. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From ambadiaravind at gmail.com Mon Feb 12 11:12:41 2018 From: ambadiaravind at gmail.com (aRaviNd) Date: Mon, 12 Feb 2018 16:42:41 +0530 Subject: [Openstack] Not able to upload files to openstack swift In-Reply-To: References: Message-ID: Hi Clay, Hi All, Configured my cluster from ground up with one proxy node and one storage node. ​ Now I am getting two types of errors. 
Large files: Feb 3 18:12:33 centos7-swift-proxy1 swift-proxy-server: ERROR with Object server 192.168.47.128:6201/sda re: Trying to write to /AUTH_admin/ara1/abc: #012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", line 1617, in _send_file#012 self.conn.send(to_send)#012 File "/usr/lib64/python2.7/httplib.py", line 840, in send#012 self.sock.sendall(data)#012 File "/usr/lib/python2.7/site-packages/eventlet/greenio/base.py", line 393, in sendall#012 tail += self.send(data[tail:], flags)#012 File "/usr/lib/python2.7/site-packages/eventlet/greenio/base.py", line 384, in send#012 return self._send_loop(self.fd.send, data, flags)#012 File "/usr/lib/python2.7/site-packages/eventlet/greenio/base.py", line 371, in _send_loop#012 return send_method(data, *args)#012error: [Errno 32] Broken pipe Feb 3 18:12:33 centos7-swift-proxy1 swift-proxy-server: Object 1 PUT exceptions during send, 0/1 required connections (txn: tx0b01a226078f47b4b3593-005a7641e1) (client_ip: 192.168.47.132) Large file error on storage node: Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: (112666) accepted ('192.168.47.132', 35624) Feb 4 19:15:51 centos7-swift-node1 container-server: 192.168.47.132 - - [05/Feb/2018:00:15:51 +0000] "PUT /sda/89/AUTH_admin/ara1" 202 - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1" "tx158875c238214209bdfc3-005a7646ab" "proxy-server 44255" 0.0063 "-" 112666 0 Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: 192.168.47.132 - - [05/Feb/2018 00:15:51] "PUT /sda/89/AUTH_admin/ara1 HTTP/1.1" 202 252 0.006706 (txn: tx158875c238214209bdfc3-005a7646ab) Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: (112666) accepted ('192.168.47.132', 35630) Feb 4 19:15:51 centos7-swift-node1 container-server: 192.168.47.132 - - [05/Feb/2018:00:15:51 +0000] "HEAD /sda/89/AUTH_admin/ara1" 204 - "HEAD http://192.168.47.132:8080/v1/AUTH_admin/ara1" "txc23a59c70e8940bf85101-005a7646ab" "proxy-server 44253" 0.0011 "-" 112666 0 Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: 192.168.47.132 - - [05/Feb/2018 00:15:51] "HEAD /sda/89/AUTH_admin/ara1 HTTP/1.1" 204 521 0.001422 (txn: txc23a59c70e8940bf85101-005a7646ab) Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: (112666) accepted ('192.168.47.132', 35632) Feb 4 19:15:51 centos7-swift-node1 container-server: 192.168.47.132 - - [05/Feb/2018:00:15:51 +0000] "HEAD /sda/75/AUTH_admin/ara1/abc" 404 - "HEAD http://192.168.47.132:8080/v1/AUTH_admin/ara1/abc" "txc23a59c70e8940bf85101-005a7646ab" "proxy-server 44253" 0.0003 "-" 112666 0 Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: 192.168.47.132 - - [05/Feb/2018 00:15:51] "HEAD /sda/75/AUTH_admin/ara1/abc HTTP/1.1" 404 351 0.000549 (txn: txc23a59c70e8940bf85101-005a7646ab) Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: (112666) accepted ('192.168.47.132', 35634) Feb 4 19:15:51 centos7-swift-node1 container-server: 192.168.47.132 - - [05/Feb/2018:00:15:51 +0000] "PUT /sda/75/AUTH_admin/ara1/abc" 404 - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/abc" "tx5138df5cf1f840808d2b2-005a7646ab" "proxy-server 44253" 0.0002 "-" 112666 0 Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: 192.168.47.132 - - [05/Feb/2018 00:15:51] "PUT /sda/75/AUTH_admin/ara1/abc HTTP/1.1" 404 212 0.000506 (txn: tx5138df5cf1f840808d2b2-005a7646ab) Feb 4 19:15:52 centos7-swift-node1 container-server: STDERR: (112666) accepted ('192.168.47.132', 35636) Feb 4 19:15:52 centos7-swift-node1 
container-server: 192.168.47.132 - - [05/Feb/2018:00:15:52 +0000] "PUT /sda/75/AUTH_admin/ara1/abc" 404 - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/abc" "txeb1c76b896854e4885009-005a7646ac" "proxy-server 44253" 0.0002 "-" 112666 0 Feb 4 19:15:52 centos7-swift-node1 container-server: STDERR: 192.168.47.132 - - [05/Feb/2018 00:15:52] "PUT /sda/75/AUTH_admin/ara1/abc HTTP/1.1" 404 212 0.000589 (txn: txeb1c76b896854e4885009-005a7646ac) Feb 4 19:15:54 centos7-swift-node1 container-server: STDERR: (112666) accepted ('192.168.47.132', 35638) Feb 4 19:15:54 centos7-swift-node1 container-server: 192.168.47.132 - - [05/Feb/2018:00:15:54 +0000] "PUT /sda/75/AUTH_admin/ara1/abc" 404 - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/abc" "tx1a011b125608447ca36a8-005a7646ae" "proxy-server 44253" 0.0002 "-" 112666 0 Feb 4 19:15:54 centos7-swift-node1 container-server: STDERR: 192.168.47.132 - - [05/Feb/2018 00:15:54] "PUT /sda/75/AUTH_admin/ara1/abc HTTP/1.1" 404 212 0.000569 (txn: tx1a011b125608447ca36a8-005a7646ae) or Smaller files: Feb 3 18:30:40 centos7-swift-proxy1 swift-proxy-server: ERROR Unhandled exception in request: #012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/swift/proxy/server.py", line 521, in handle_request#012 return handler(req)#012 File "/usr/lib/python2.7/site-packages/swift/proxy/controllers/base.py", line 283, in wrapped#012 return func(*a, **kw)#012 File "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", line 745, in PUT#012 req, data_source, nodes, partition, outgoing_headers)#012 File "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", line 949, in _store_object#012 self._get_put_responses(req, putters, len(nodes))#012 File "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", line 406, in _get_put_responses#012 _handle_response(putter, response)#012 File "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", line 402, in _handle_response#012 etags.add(response.getheader('etag').strip('"'))#012AttributeError: 'NoneType' object has no attribute 'strip' (txn: tx029f271d16a84de09d4fb-005a764620) (client_ip: 192.168.47.132) Smaller file error on storage node: Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: (112666) accepted ('192.168.47.132', 35696) Feb 4 19:17:34 centos7-swift-node1 container-server: 192.168.47.132 - - [05/Feb/2018:00:17:34 +0000] "PUT /sda/89/AUTH_admin/ara1" 202 - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1" "txb8d792ec40844162978cc-005a764712" "proxy-server 44253" 0.0077 "-" 112666 0 Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: 192.168.47.132 - - [05/Feb/2018 00:17:34] "PUT /sda/89/AUTH_admin/ara1 HTTP/1.1" 202 252 0.008124 (txn: txb8d792ec40844162978cc-005a764712) Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: (112666) accepted ('192.168.47.132', 35702) Feb 4 19:17:34 centos7-swift-node1 container-server: 192.168.47.132 - - [05/Feb/2018:00:17:34 +0000] "HEAD /sda/89/AUTH_admin/ara1" 204 - "HEAD http://192.168.47.132:8080/v1/AUTH_admin/ara1" "tx672029e079574e78b1c1b-005a764712" "proxy-server 44254" 0.0011 "-" 112666 0 Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: 192.168.47.132 - - [05/Feb/2018 00:17:34] "HEAD /sda/89/AUTH_admin/ara1 HTTP/1.1" 204 521 0.001465 (txn: tx672029e079574e78b1c1b-005a764712) Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: (112666) accepted ('192.168.47.132', 35704) Feb 4 19:17:34 centos7-swift-node1 container-server: 192.168.47.132 - - [05/Feb/2018:00:17:34 
+0000] "HEAD /sda/26/AUTH_admin/ara1/swift.conf" 404 - "HEAD http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" "tx672029e079574e78b1c1b-005a764712" "proxy-server 44254" 0.0003 "-" 112666 0 Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: 192.168.47.132 - - [05/Feb/2018 00:17:34] "HEAD /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 351 0.000547 (txn: tx672029e079574e78b1c1b-005a764712) Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: (112666) accepted ('192.168.47.132', 35706) Feb 4 19:17:34 centos7-swift-node1 container-server: 192.168.47.132 - - [05/Feb/2018:00:17:34 +0000] "PUT /sda/26/AUTH_admin/ara1/swift.conf" 404 - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" "tx0ec94217b5584b7ea4e21-005a764712" "proxy-server 44254" 0.0002 "-" 112666 0 Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: 192.168.47.132 - - [05/Feb/2018 00:17:34] "PUT /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 212 0.000513 (txn: tx0ec94217b5584b7ea4e21-005a764712) Feb 4 19:17:35 centos7-swift-node1 container-server: STDERR: (112666) accepted ('192.168.47.132', 35708) Feb 4 19:17:35 centos7-swift-node1 container-server: 192.168.47.132 - - [05/Feb/2018:00:17:35 +0000] "PUT /sda/26/AUTH_admin/ara1/swift.conf" 404 - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" "tx8e93f98453804972b39d8-005a764713" "proxy-server 44254" 0.0004 "-" 112666 0 Feb 4 19:17:35 centos7-swift-node1 container-server: STDERR: 192.168.47.132 - - [05/Feb/2018 00:17:35] "PUT /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 212 0.000791 (txn: tx8e93f98453804972b39d8-005a764713) Feb 4 19:17:37 centos7-swift-node1 container-server: STDERR: (112666) accepted ('192.168.47.132', 35710) Feb 4 19:17:37 centos7-swift-node1 container-server: 192.168.47.132 - - [05/Feb/2018:00:17:37 +0000] "PUT /sda/26/AUTH_admin/ara1/swift.conf" 404 - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" "txd44faaca5b5b408da4fea-005a764715" "proxy-server 44254" 0.0004 "-" 112666 0 Feb 4 19:17:37 centos7-swift-node1 container-server: STDERR: 192.168.47.132 - - [05/Feb/2018 00:17:37] "PUT /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 212 0.000716 (txn: txd44faaca5b5b408da4fea-005a764715) Feb 4 19:17:41 centos7-swift-node1 container-server: STDERR: (112666) accepted ('192.168.47.132', 35712) Feb 4 19:17:41 centos7-swift-node1 container-server: 192.168.47.132 - - [05/Feb/2018:00:17:41 +0000] "PUT /sda/26/AUTH_admin/ara1/swift.conf" 404 - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" "tx6e5c49ba6d54489aa1de5-005a764719" "proxy-server 44254" 0.0002 "-" 112666 0 Feb 4 19:17:41 centos7-swift-node1 container-server: STDERR: 192.168.47.132 - - [05/Feb/2018 00:17:41] "PUT /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 212 0.000590 (txn: tx6e5c49ba6d54489aa1de5-005a764719) Directories created on storage node are We are really stuck on the issue and not able to use swift in our environment. Any help will be really appreciated.​ Aravind M D On Thu, Feb 8, 2018 at 2:43 AM, Clay Gerrard wrote: > One replica is a little strange. Do the uploads *always* fail - in the > same way? Or is this just one example of a PUT that returned 503? Are you > doing a lot of concurrent PUTs to the same object/name/disk? > > The error from the log (EPIPE) means the object-server closed the > connection as the proxy was writing to it... which is a little strange. 
> There should be a corresponding exception/error from the object-server > service - you should make sure the object-servers are running and find > where they are logging - then grep all the logs for the transaction-id to > get a better picture of the whole distributed transaction. > > If you keep digging I know you can find the problem. Let us know what you > find. > > Good luck, > > -Clay > > > On Wed, Feb 7, 2018 at 1:58 AM, aRaviNd wrote: > >> Hi All, >> >> We have created an openstack cluster with one proxy server and three >> storage nodes. Configuration consist of two regions and three zones. >> >> [image: enter image description here] >> >> >> We are able to create containers >> >> [image: enter image description here] >> >> >> But while trying to upload files we are getting 503 service unavailable >> and seeing below logs in swift.log >> >> [image: enter image description here] >> >> - Aravind >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-02-12 at 4.41.21 PM.png Type: image/png Size: 13242 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-02-12 at 4.34.07 PM.png Type: image/png Size: 32289 bytes Desc: not available URL: From openstack-mlist at yandex.ru Mon Feb 12 11:35:55 2018 From: openstack-mlist at yandex.ru (Vladimir Parf) Date: Mon, 12 Feb 2018 14:35:55 +0300 Subject: [Openstack] I need help with VXLAN tunnelling in Openstack environment. Message-ID: <443081518435355@web5g.yandex.ru> An HTML attachment was scrubbed... URL: From amoghpatel4u at gmail.com Mon Feb 12 22:00:08 2018 From: amoghpatel4u at gmail.com (amogh patel) Date: Mon, 12 Feb 2018 22:00:08 +0000 Subject: [Openstack] =?utf-8?q?=28no_subject=29?= Message-ID: <0m09d1e98h2w55at6s5knhq2.15184728085306@email.android.com> Hello Openstack goo.gl/WyF2m8 Amogh From matt at oliver.net.au Mon Feb 12 22:29:22 2018 From: matt at oliver.net.au (Matthew Oliver) Date: Tue, 13 Feb 2018 09:29:22 +1100 Subject: [Openstack] Not able to upload files to openstack swift In-Reply-To: References: Message-ID: Hi Aravind, There only seems to be container-server logs in your reply. So you have any from the object server? Also what ports are your object-server and container-servers listening on? Your ring is saying the object-servers are listening on 6201. Just making sure the port numbers aren't confused and the proxy isn't tryng to send the object PUTs to the container servers. Because it's interesting that there is no objects subfolder. Matt On Mon, Feb 12, 2018 at 10:12 PM, aRaviNd wrote: > Hi Clay, Hi All, > > Configured my cluster from ground up with one proxy node and one storage > node. > > > ​ > Now I am getting two types of errors. 
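(One quick way to check Matt's point about the ring ports -- a sketch, assuming the builder files live in the usual /etc/swift location:

    swift-ring-builder /etc/swift/object.builder
    swift-ring-builder /etc/swift/container.builder

The port listed for each device should match bind_port in object-server.conf and container-server.conf respectively; in recent releases the object-server default is 6200 and the container-server default is 6201, so an object ring pointing at 6201 deserves a second look.)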
> > Large files: > > Feb 3 18:12:33 centos7-swift-proxy1 swift-proxy-server: ERROR with Object > server 192.168.47.128:6201/sda re: Trying to write to > /AUTH_admin/ara1/abc: #012Traceback (most recent call last):#012 File > "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", line > 1617, in _send_file#012 self.conn.send(to_send)#012 File > "/usr/lib64/python2.7/httplib.py", line 840, in send#012 > self.sock.sendall(data)#012 File "/usr/lib/python2.7/site- > packages/eventlet/greenio/base.py", line 393, in sendall#012 tail += > self.send(data[tail:], flags)#012 File "/usr/lib/python2.7/site- > packages/eventlet/greenio/base.py", line 384, in send#012 return > self._send_loop(self.fd.send, data, flags)#012 File > "/usr/lib/python2.7/site-packages/eventlet/greenio/base.py", line 371, in > _send_loop#012 return send_method(data, *args)#012error: [Errno 32] > Broken pipe > Feb 3 18:12:33 centos7-swift-proxy1 swift-proxy-server: Object 1 PUT > exceptions during send, 0/1 required connections (txn: > tx0b01a226078f47b4b3593-005a7641e1) (client_ip: 192.168.47.132) > > Large file error on storage node: > > Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: (112666) > accepted ('192.168.47.132', 35624) > Feb 4 19:15:51 centos7-swift-node1 container-server: 192.168.47.132 - - > [05/Feb/2018:00:15:51 +0000] "PUT /sda/89/AUTH_admin/ara1" 202 - "PUT > http://192.168.47.132:8080/v1/AUTH_admin/ara1" "tx158875c238214209bdfc3-005a7646ab" > "proxy-server 44255" 0.0063 "-" 112666 0 > Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: > 192.168.47.132 - - [05/Feb/2018 00:15:51] "PUT /sda/89/AUTH_admin/ara1 > HTTP/1.1" 202 252 0.006706 (txn: tx158875c238214209bdfc3-005a7646ab) > Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: (112666) > accepted ('192.168.47.132', 35630) > Feb 4 19:15:51 centos7-swift-node1 container-server: 192.168.47.132 - - > [05/Feb/2018:00:15:51 +0000] "HEAD /sda/89/AUTH_admin/ara1" 204 - "HEAD > http://192.168.47.132:8080/v1/AUTH_admin/ara1" "txc23a59c70e8940bf85101-005a7646ab" > "proxy-server 44253" 0.0011 "-" 112666 0 > Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: > 192.168.47.132 - - [05/Feb/2018 00:15:51] "HEAD /sda/89/AUTH_admin/ara1 > HTTP/1.1" 204 521 0.001422 (txn: txc23a59c70e8940bf85101-005a7646ab) > Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: (112666) > accepted ('192.168.47.132', 35632) > Feb 4 19:15:51 centos7-swift-node1 container-server: 192.168.47.132 - - > [05/Feb/2018:00:15:51 +0000] "HEAD /sda/75/AUTH_admin/ara1/abc" 404 - "HEAD > http://192.168.47.132:8080/v1/AUTH_admin/ara1/abc" > "txc23a59c70e8940bf85101-005a7646ab" "proxy-server 44253" 0.0003 "-" > 112666 0 > Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: > 192.168.47.132 - - [05/Feb/2018 00:15:51] "HEAD /sda/75/AUTH_admin/ara1/abc > HTTP/1.1" 404 351 0.000549 (txn: txc23a59c70e8940bf85101-005a7646ab) > Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: (112666) > accepted ('192.168.47.132', 35634) > Feb 4 19:15:51 centos7-swift-node1 container-server: 192.168.47.132 - - > [05/Feb/2018:00:15:51 +0000] "PUT /sda/75/AUTH_admin/ara1/abc" 404 - "PUT > http://192.168.47.132:8080/v1/AUTH_admin/ara1/abc" > "tx5138df5cf1f840808d2b2-005a7646ab" "proxy-server 44253" 0.0002 "-" > 112666 0 > Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: > 192.168.47.132 - - [05/Feb/2018 00:15:51] "PUT /sda/75/AUTH_admin/ara1/abc > HTTP/1.1" 404 212 0.000506 (txn: tx5138df5cf1f840808d2b2-005a7646ab) > Feb 4 19:15:52 
centos7-swift-node1 container-server: STDERR: (112666) > accepted ('192.168.47.132', 35636) > Feb 4 19:15:52 centos7-swift-node1 container-server: 192.168.47.132 - - > [05/Feb/2018:00:15:52 +0000] "PUT /sda/75/AUTH_admin/ara1/abc" 404 - "PUT > http://192.168.47.132:8080/v1/AUTH_admin/ara1/abc" > "txeb1c76b896854e4885009-005a7646ac" "proxy-server 44253" 0.0002 "-" > 112666 0 > Feb 4 19:15:52 centos7-swift-node1 container-server: STDERR: > 192.168.47.132 - - [05/Feb/2018 00:15:52] "PUT /sda/75/AUTH_admin/ara1/abc > HTTP/1.1" 404 212 0.000589 (txn: txeb1c76b896854e4885009-005a7646ac) > Feb 4 19:15:54 centos7-swift-node1 container-server: STDERR: (112666) > accepted ('192.168.47.132', 35638) > Feb 4 19:15:54 centos7-swift-node1 container-server: 192.168.47.132 - - > [05/Feb/2018:00:15:54 +0000] "PUT /sda/75/AUTH_admin/ara1/abc" 404 - "PUT > http://192.168.47.132:8080/v1/AUTH_admin/ara1/abc" > "tx1a011b125608447ca36a8-005a7646ae" "proxy-server 44253" 0.0002 "-" > 112666 0 > Feb 4 19:15:54 centos7-swift-node1 container-server: STDERR: > 192.168.47.132 - - [05/Feb/2018 00:15:54] "PUT /sda/75/AUTH_admin/ara1/abc > HTTP/1.1" 404 212 0.000569 (txn: tx1a011b125608447ca36a8-005a7646ae) > > or > > Smaller files: > > Feb 3 18:30:40 centos7-swift-proxy1 swift-proxy-server: ERROR Unhandled > exception in request: #012Traceback (most recent call last):#012 File > "/usr/lib/python2.7/site-packages/swift/proxy/server.py", line 521, in > handle_request#012 return handler(req)#012 File > "/usr/lib/python2.7/site-packages/swift/proxy/controllers/base.py", line > 283, in wrapped#012 return func(*a, **kw)#012 File > "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", line > 745, in PUT#012 req, data_source, nodes, partition, > outgoing_headers)#012 File "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", > line 949, in _store_object#012 self._get_put_responses(req, putters, > len(nodes))#012 File "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", > line 406, in _get_put_responses#012 _handle_response(putter, > response)#012 File "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", > line 402, in _handle_response#012 etags.add(response.getheader(' > etag').strip('"'))#012AttributeError: 'NoneType' object has no attribute > 'strip' (txn: tx029f271d16a84de09d4fb-005a764620) (client_ip: > 192.168.47.132) > > Smaller file error on storage node: > > Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: (112666) > accepted ('192.168.47.132', 35696) > Feb 4 19:17:34 centos7-swift-node1 container-server: 192.168.47.132 - - > [05/Feb/2018:00:17:34 +0000] "PUT /sda/89/AUTH_admin/ara1" 202 - "PUT > http://192.168.47.132:8080/v1/AUTH_admin/ara1" "txb8d792ec40844162978cc-005a764712" > "proxy-server 44253" 0.0077 "-" 112666 0 > Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: > 192.168.47.132 - - [05/Feb/2018 00:17:34] "PUT /sda/89/AUTH_admin/ara1 > HTTP/1.1" 202 252 0.008124 (txn: txb8d792ec40844162978cc-005a764712) > Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: (112666) > accepted ('192.168.47.132', 35702) > Feb 4 19:17:34 centos7-swift-node1 container-server: 192.168.47.132 - - > [05/Feb/2018:00:17:34 +0000] "HEAD /sda/89/AUTH_admin/ara1" 204 - "HEAD > http://192.168.47.132:8080/v1/AUTH_admin/ara1" "tx672029e079574e78b1c1b-005a764712" > "proxy-server 44254" 0.0011 "-" 112666 0 > Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: > 192.168.47.132 - - [05/Feb/2018 00:17:34] "HEAD /sda/89/AUTH_admin/ara1 > HTTP/1.1" 204 521 
0.001465 (txn: tx672029e079574e78b1c1b-005a764712) > Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: (112666) > accepted ('192.168.47.132', 35704) > Feb 4 19:17:34 centos7-swift-node1 container-server: 192.168.47.132 - - > [05/Feb/2018:00:17:34 +0000] "HEAD /sda/26/AUTH_admin/ara1/swift.conf" > 404 - "HEAD http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" > "tx672029e079574e78b1c1b-005a764712" "proxy-server 44254" 0.0003 "-" > 112666 0 > Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: > 192.168.47.132 - - [05/Feb/2018 00:17:34] "HEAD > /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 351 0.000547 (txn: > tx672029e079574e78b1c1b-005a764712) > Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: (112666) > accepted ('192.168.47.132', 35706) > Feb 4 19:17:34 centos7-swift-node1 container-server: 192.168.47.132 - - > [05/Feb/2018:00:17:34 +0000] "PUT /sda/26/AUTH_admin/ara1/swift.conf" 404 > - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" > "tx0ec94217b5584b7ea4e21-005a764712" "proxy-server 44254" 0.0002 "-" > 112666 0 > Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: > 192.168.47.132 - - [05/Feb/2018 00:17:34] "PUT > /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 212 0.000513 (txn: > tx0ec94217b5584b7ea4e21-005a764712) > Feb 4 19:17:35 centos7-swift-node1 container-server: STDERR: (112666) > accepted ('192.168.47.132', 35708) > Feb 4 19:17:35 centos7-swift-node1 container-server: 192.168.47.132 - - > [05/Feb/2018:00:17:35 +0000] "PUT /sda/26/AUTH_admin/ara1/swift.conf" 404 > - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" > "tx8e93f98453804972b39d8-005a764713" "proxy-server 44254" 0.0004 "-" > 112666 0 > Feb 4 19:17:35 centos7-swift-node1 container-server: STDERR: > 192.168.47.132 - - [05/Feb/2018 00:17:35] "PUT > /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 212 0.000791 (txn: > tx8e93f98453804972b39d8-005a764713) > Feb 4 19:17:37 centos7-swift-node1 container-server: STDERR: (112666) > accepted ('192.168.47.132', 35710) > Feb 4 19:17:37 centos7-swift-node1 container-server: 192.168.47.132 - - > [05/Feb/2018:00:17:37 +0000] "PUT /sda/26/AUTH_admin/ara1/swift.conf" 404 > - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" > "txd44faaca5b5b408da4fea-005a764715" "proxy-server 44254" 0.0004 "-" > 112666 0 > Feb 4 19:17:37 centos7-swift-node1 container-server: STDERR: > 192.168.47.132 - - [05/Feb/2018 00:17:37] "PUT > /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 212 0.000716 (txn: > txd44faaca5b5b408da4fea-005a764715) > Feb 4 19:17:41 centos7-swift-node1 container-server: STDERR: (112666) > accepted ('192.168.47.132', 35712) > Feb 4 19:17:41 centos7-swift-node1 container-server: 192.168.47.132 - - > [05/Feb/2018:00:17:41 +0000] "PUT /sda/26/AUTH_admin/ara1/swift.conf" 404 > - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" > "tx6e5c49ba6d54489aa1de5-005a764719" "proxy-server 44254" 0.0002 "-" > 112666 0 > Feb 4 19:17:41 centos7-swift-node1 container-server: STDERR: > 192.168.47.132 - - [05/Feb/2018 00:17:41] "PUT > /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 212 0.000590 (txn: > tx6e5c49ba6d54489aa1de5-005a764719) > > Directories created on storage node are > > > > We are really stuck on the issue and not able to use swift in our > environment. Any help will be really appreciated.​ > > Aravind M D > > On Thu, Feb 8, 2018 at 2:43 AM, Clay Gerrard > wrote: > >> One replica is a little strange. Do the uploads *always* fail - in the >> same way? 
Or is this just one example of a PUT that returned 503? Are you >> doing a lot of concurrent PUTs to the same object/name/disk? >> >> The error from the log (EPIPE) means the object-server closed the >> connection as the proxy was writing to it... which is a little strange. >> There should be a corresponding exception/error from the object-server >> service - you should make sure the object-servers are running and find >> where they are logging - then grep all the logs for the transaction-id to >> get a better picture of the whole distributed transaction. >> >> If you keep digging I know you can find the problem. Let us know what >> you find. >> >> Good luck, >> >> -Clay >> >> >> On Wed, Feb 7, 2018 at 1:58 AM, aRaviNd wrote: >> >>> Hi All, >>> >>> We have created an openstack cluster with one proxy server and three >>> storage nodes. Configuration consist of two regions and three zones. >>> >>> [image: enter image description here] >>> >>> >>> We are able to create containers >>> >>> [image: enter image description here] >>> >>> >>> But while trying to upload files we are getting 503 service unavailable >>> and seeing below logs in swift.log >>> >>> [image: enter image description here] >>> >>> - Aravind >>> >>> _______________________________________________ >>> Mailing list: http://lists.openstack.org/cgi >>> -bin/mailman/listinfo/openstack >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : http://lists.openstack.org/cgi >>> -bin/mailman/listinfo/openstack >>> >>> >> > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-02-12 at 4.41.21 PM.png Type: image/png Size: 13242 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-02-12 at 4.34.07 PM.png Type: image/png Size: 32289 bytes Desc: not available URL: From ambadiaravind at gmail.com Tue Feb 13 11:16:14 2018 From: ambadiaravind at gmail.com (aRaviNd) Date: Tue, 13 Feb 2018 16:46:14 +0530 Subject: [Openstack] Not able to upload files to openstack swift In-Reply-To: References: Message-ID: Thanks Matthew for your reply Problem is resolved now both object and container ring was using same port. Changing the port in ring configuration fixed the issue and we are able to upload the files now. Aravind M D On Tue, Feb 13, 2018 at 3:59 AM, Matthew Oliver wrote: > Hi Aravind, > > There only seems to be container-server logs in your reply. So you have > any from the object server? > Also what ports are your object-server and container-servers listening on? > Your ring is saying the object-servers are listening on 6201. Just making > sure the port numbers aren't confused and the proxy isn't tryng to send the > object PUTs to the container servers. Because it's interesting that there > is no objects subfolder. > > Matt > > On Mon, Feb 12, 2018 at 10:12 PM, aRaviNd wrote: > >> Hi Clay, Hi All, >> >> Configured my cluster from ground up with one proxy node and one storage >> node. >> >> >> ​ >> Now I am getting two types of errors. 
>> >> Large files: >> >> Feb 3 18:12:33 centos7-swift-proxy1 swift-proxy-server: ERROR with >> Object server 192.168.47.128:6201/sda re: Trying to write to >> /AUTH_admin/ara1/abc: #012Traceback (most recent call last):#012 File >> "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", line >> 1617, in _send_file#012 self.conn.send(to_send)#012 File >> "/usr/lib64/python2.7/httplib.py", line 840, in send#012 >> self.sock.sendall(data)#012 File "/usr/lib/python2.7/site-packages/eventlet/greenio/base.py", >> line 393, in sendall#012 tail += self.send(data[tail:], flags)#012 File >> "/usr/lib/python2.7/site-packages/eventlet/greenio/base.py", line 384, >> in send#012 return self._send_loop(self.fd.send, data, flags)#012 File >> "/usr/lib/python2.7/site-packages/eventlet/greenio/base.py", line 371, >> in _send_loop#012 return send_method(data, *args)#012error: [Errno 32] >> Broken pipe >> Feb 3 18:12:33 centos7-swift-proxy1 swift-proxy-server: Object 1 PUT >> exceptions during send, 0/1 required connections (txn: >> tx0b01a226078f47b4b3593-005a7641e1) (client_ip: 192.168.47.132) >> >> Large file error on storage node: >> >> Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: (112666) >> accepted ('192.168.47.132', 35624) >> Feb 4 19:15:51 centos7-swift-node1 container-server: 192.168.47.132 - - >> [05/Feb/2018:00:15:51 +0000] "PUT /sda/89/AUTH_admin/ara1" 202 - "PUT >> http://192.168.47.132:8080/v1/AUTH_admin/ara1" >> "tx158875c238214209bdfc3-005a7646ab" "proxy-server 44255" 0.0063 "-" >> 112666 0 >> Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: >> 192.168.47.132 - - [05/Feb/2018 00:15:51] "PUT /sda/89/AUTH_admin/ara1 >> HTTP/1.1" 202 252 0.006706 (txn: tx158875c238214209bdfc3-005a7646ab) >> Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: (112666) >> accepted ('192.168.47.132', 35630) >> Feb 4 19:15:51 centos7-swift-node1 container-server: 192.168.47.132 - - >> [05/Feb/2018:00:15:51 +0000] "HEAD /sda/89/AUTH_admin/ara1" 204 - "HEAD >> http://192.168.47.132:8080/v1/AUTH_admin/ara1" >> "txc23a59c70e8940bf85101-005a7646ab" "proxy-server 44253" 0.0011 "-" >> 112666 0 >> Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: >> 192.168.47.132 - - [05/Feb/2018 00:15:51] "HEAD /sda/89/AUTH_admin/ara1 >> HTTP/1.1" 204 521 0.001422 (txn: txc23a59c70e8940bf85101-005a7646ab) >> Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: (112666) >> accepted ('192.168.47.132', 35632) >> Feb 4 19:15:51 centos7-swift-node1 container-server: 192.168.47.132 - - >> [05/Feb/2018:00:15:51 +0000] "HEAD /sda/75/AUTH_admin/ara1/abc" 404 - "HEAD >> http://192.168.47.132:8080/v1/AUTH_admin/ara1/abc" >> "txc23a59c70e8940bf85101-005a7646ab" "proxy-server 44253" 0.0003 "-" >> 112666 0 >> Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: >> 192.168.47.132 - - [05/Feb/2018 00:15:51] "HEAD /sda/75/AUTH_admin/ara1/abc >> HTTP/1.1" 404 351 0.000549 (txn: txc23a59c70e8940bf85101-005a7646ab) >> Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: (112666) >> accepted ('192.168.47.132', 35634) >> Feb 4 19:15:51 centos7-swift-node1 container-server: 192.168.47.132 - - >> [05/Feb/2018:00:15:51 +0000] "PUT /sda/75/AUTH_admin/ara1/abc" 404 - "PUT >> http://192.168.47.132:8080/v1/AUTH_admin/ara1/abc" >> "tx5138df5cf1f840808d2b2-005a7646ab" "proxy-server 44253" 0.0002 "-" >> 112666 0 >> Feb 4 19:15:51 centos7-swift-node1 container-server: STDERR: >> 192.168.47.132 - - [05/Feb/2018 00:15:51] "PUT /sda/75/AUTH_admin/ara1/abc >> HTTP/1.1" 404 212 0.000506 (txn: 
tx5138df5cf1f840808d2b2-005a7646ab) >> Feb 4 19:15:52 centos7-swift-node1 container-server: STDERR: (112666) >> accepted ('192.168.47.132', 35636) >> Feb 4 19:15:52 centos7-swift-node1 container-server: 192.168.47.132 - - >> [05/Feb/2018:00:15:52 +0000] "PUT /sda/75/AUTH_admin/ara1/abc" 404 - "PUT >> http://192.168.47.132:8080/v1/AUTH_admin/ara1/abc" >> "txeb1c76b896854e4885009-005a7646ac" "proxy-server 44253" 0.0002 "-" >> 112666 0 >> Feb 4 19:15:52 centos7-swift-node1 container-server: STDERR: >> 192.168.47.132 - - [05/Feb/2018 00:15:52] "PUT /sda/75/AUTH_admin/ara1/abc >> HTTP/1.1" 404 212 0.000589 (txn: txeb1c76b896854e4885009-005a7646ac) >> Feb 4 19:15:54 centos7-swift-node1 container-server: STDERR: (112666) >> accepted ('192.168.47.132', 35638) >> Feb 4 19:15:54 centos7-swift-node1 container-server: 192.168.47.132 - - >> [05/Feb/2018:00:15:54 +0000] "PUT /sda/75/AUTH_admin/ara1/abc" 404 - "PUT >> http://192.168.47.132:8080/v1/AUTH_admin/ara1/abc" >> "tx1a011b125608447ca36a8-005a7646ae" "proxy-server 44253" 0.0002 "-" >> 112666 0 >> Feb 4 19:15:54 centos7-swift-node1 container-server: STDERR: >> 192.168.47.132 - - [05/Feb/2018 00:15:54] "PUT /sda/75/AUTH_admin/ara1/abc >> HTTP/1.1" 404 212 0.000569 (txn: tx1a011b125608447ca36a8-005a7646ae) >> >> or >> >> Smaller files: >> >> Feb 3 18:30:40 centos7-swift-proxy1 swift-proxy-server: ERROR Unhandled >> exception in request: #012Traceback (most recent call last):#012 File >> "/usr/lib/python2.7/site-packages/swift/proxy/server.py", line 521, in >> handle_request#012 return handler(req)#012 File >> "/usr/lib/python2.7/site-packages/swift/proxy/controllers/base.py", line >> 283, in wrapped#012 return func(*a, **kw)#012 File >> "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", line >> 745, in PUT#012 req, data_source, nodes, partition, >> outgoing_headers)#012 File "/usr/lib/python2.7/site-packa >> ges/swift/proxy/controllers/obj.py", line 949, in _store_object#012 >> self._get_put_responses(req, putters, len(nodes))#012 File >> "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", line >> 406, in _get_put_responses#012 _handle_response(putter, response)#012 >> File "/usr/lib/python2.7/site-packages/swift/proxy/controllers/obj.py", >> line 402, in _handle_response#012 etags.add(response.getheader(' >> etag').strip('"'))#012AttributeError: 'NoneType' object has no attribute >> 'strip' (txn: tx029f271d16a84de09d4fb-005a764620) (client_ip: >> 192.168.47.132) >> >> Smaller file error on storage node: >> >> Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: (112666) >> accepted ('192.168.47.132', 35696) >> Feb 4 19:17:34 centos7-swift-node1 container-server: 192.168.47.132 - - >> [05/Feb/2018:00:17:34 +0000] "PUT /sda/89/AUTH_admin/ara1" 202 - "PUT >> http://192.168.47.132:8080/v1/AUTH_admin/ara1" >> "txb8d792ec40844162978cc-005a764712" "proxy-server 44253" 0.0077 "-" >> 112666 0 >> Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: >> 192.168.47.132 - - [05/Feb/2018 00:17:34] "PUT /sda/89/AUTH_admin/ara1 >> HTTP/1.1" 202 252 0.008124 (txn: txb8d792ec40844162978cc-005a764712) >> Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: (112666) >> accepted ('192.168.47.132', 35702) >> Feb 4 19:17:34 centos7-swift-node1 container-server: 192.168.47.132 - - >> [05/Feb/2018:00:17:34 +0000] "HEAD /sda/89/AUTH_admin/ara1" 204 - "HEAD >> http://192.168.47.132:8080/v1/AUTH_admin/ara1" >> "tx672029e079574e78b1c1b-005a764712" "proxy-server 44254" 0.0011 "-" >> 112666 0 >> Feb 4 19:17:34 
centos7-swift-node1 container-server: STDERR: >> 192.168.47.132 - - [05/Feb/2018 00:17:34] "HEAD /sda/89/AUTH_admin/ara1 >> HTTP/1.1" 204 521 0.001465 (txn: tx672029e079574e78b1c1b-005a764712) >> Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: (112666) >> accepted ('192.168.47.132', 35704) >> Feb 4 19:17:34 centos7-swift-node1 container-server: 192.168.47.132 - - >> [05/Feb/2018:00:17:34 +0000] "HEAD /sda/26/AUTH_admin/ara1/swift.conf" >> 404 - "HEAD http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" >> "tx672029e079574e78b1c1b-005a764712" "proxy-server 44254" 0.0003 "-" >> 112666 0 >> Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: >> 192.168.47.132 - - [05/Feb/2018 00:17:34] "HEAD >> /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 351 0.000547 (txn: >> tx672029e079574e78b1c1b-005a764712) >> Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: (112666) >> accepted ('192.168.47.132', 35706) >> Feb 4 19:17:34 centos7-swift-node1 container-server: 192.168.47.132 - - >> [05/Feb/2018:00:17:34 +0000] "PUT /sda/26/AUTH_admin/ara1/swift.conf" >> 404 - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" >> "tx0ec94217b5584b7ea4e21-005a764712" "proxy-server 44254" 0.0002 "-" >> 112666 0 >> Feb 4 19:17:34 centos7-swift-node1 container-server: STDERR: >> 192.168.47.132 - - [05/Feb/2018 00:17:34] "PUT >> /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 212 0.000513 (txn: >> tx0ec94217b5584b7ea4e21-005a764712) >> Feb 4 19:17:35 centos7-swift-node1 container-server: STDERR: (112666) >> accepted ('192.168.47.132', 35708) >> Feb 4 19:17:35 centos7-swift-node1 container-server: 192.168.47.132 - - >> [05/Feb/2018:00:17:35 +0000] "PUT /sda/26/AUTH_admin/ara1/swift.conf" >> 404 - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" >> "tx8e93f98453804972b39d8-005a764713" "proxy-server 44254" 0.0004 "-" >> 112666 0 >> Feb 4 19:17:35 centos7-swift-node1 container-server: STDERR: >> 192.168.47.132 - - [05/Feb/2018 00:17:35] "PUT >> /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 212 0.000791 (txn: >> tx8e93f98453804972b39d8-005a764713) >> Feb 4 19:17:37 centos7-swift-node1 container-server: STDERR: (112666) >> accepted ('192.168.47.132', 35710) >> Feb 4 19:17:37 centos7-swift-node1 container-server: 192.168.47.132 - - >> [05/Feb/2018:00:17:37 +0000] "PUT /sda/26/AUTH_admin/ara1/swift.conf" >> 404 - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" >> "txd44faaca5b5b408da4fea-005a764715" "proxy-server 44254" 0.0004 "-" >> 112666 0 >> Feb 4 19:17:37 centos7-swift-node1 container-server: STDERR: >> 192.168.47.132 - - [05/Feb/2018 00:17:37] "PUT >> /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 212 0.000716 (txn: >> txd44faaca5b5b408da4fea-005a764715) >> Feb 4 19:17:41 centos7-swift-node1 container-server: STDERR: (112666) >> accepted ('192.168.47.132', 35712) >> Feb 4 19:17:41 centos7-swift-node1 container-server: 192.168.47.132 - - >> [05/Feb/2018:00:17:41 +0000] "PUT /sda/26/AUTH_admin/ara1/swift.conf" >> 404 - "PUT http://192.168.47.132:8080/v1/AUTH_admin/ara1/swift.conf" >> "tx6e5c49ba6d54489aa1de5-005a764719" "proxy-server 44254" 0.0002 "-" >> 112666 0 >> Feb 4 19:17:41 centos7-swift-node1 container-server: STDERR: >> 192.168.47.132 - - [05/Feb/2018 00:17:41] "PUT >> /sda/26/AUTH_admin/ara1/swift.conf HTTP/1.1" 404 212 0.000590 (txn: >> tx6e5c49ba6d54489aa1de5-005a764719) >> >> Directories created on storage node are >> >> >> >> We are really stuck on the issue and not able to use swift in our >> environment. 
Any help will be really appreciated.​ >> >> Aravind M D >> >> On Thu, Feb 8, 2018 at 2:43 AM, Clay Gerrard >> wrote: >> >>> One replica is a little strange. Do the uploads *always* fail - in the >>> same way? Or is this just one example of a PUT that returned 503? Are you >>> doing a lot of concurrent PUTs to the same object/name/disk? >>> >>> The error from the log (EPIPE) means the object-server closed the >>> connection as the proxy was writing to it... which is a little strange. >>> There should be a corresponding exception/error from the object-server >>> service - you should make sure the object-servers are running and find >>> where they are logging - then grep all the logs for the transaction-id to >>> get a better picture of the whole distributed transaction. >>> >>> If you keep digging I know you can find the problem. Let us know what >>> you find. >>> >>> Good luck, >>> >>> -Clay >>> >>> >>> On Wed, Feb 7, 2018 at 1:58 AM, aRaviNd wrote: >>> >>>> Hi All, >>>> >>>> We have created an openstack cluster with one proxy server and three >>>> storage nodes. Configuration consist of two regions and three zones. >>>> >>>> [image: enter image description here] >>>> >>>> >>>> We are able to create containers >>>> >>>> [image: enter image description here] >>>> >>>> >>>> But while trying to upload files we are getting 503 service unavailable >>>> and seeing below logs in swift.log >>>> >>>> [image: enter image description here] >>>> >>>> - Aravind >>>> >>>> _______________________________________________ >>>> Mailing list: http://lists.openstack.org/cgi >>>> -bin/mailman/listinfo/openstack >>>> Post to : openstack at lists.openstack.org >>>> Unsubscribe : http://lists.openstack.org/cgi >>>> -bin/mailman/listinfo/openstack >>>> >>>> >>> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-02-12 at 4.41.21 PM.png Type: image/png Size: 13242 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-02-12 at 4.34.07 PM.png Type: image/png Size: 32289 bytes Desc: not available URL: From ambadiaravind at gmail.com Tue Feb 13 11:21:05 2018 From: ambadiaravind at gmail.com (aRaviNd) Date: Tue, 13 Feb 2018 16:51:05 +0530 Subject: [Openstack] Openstack Shift Write Minimum Node requirement Message-ID: Hi All, We have configured swift with one proxy and three storage nodes. Our setup contain two regions and three zones. ​ When two nodes goes down we are not able to upload any files but download and read is working fine. Is there any requirement for minimum no of storage nodes required for write to work? Aravind M D -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Screen Shot 2018-02-13 at 4.49.29 PM.png Type: image/png Size: 41507 bytes Desc: not available URL: From ambadiaravind at gmail.com Tue Feb 13 14:22:05 2018 From: ambadiaravind at gmail.com (aRaviNd) Date: Tue, 13 Feb 2018 19:52:05 +0530 Subject: [Openstack] Openstack data replication Message-ID: Hi All, We are working on implementing Openstack swift replication and would like to know whats the better approach, container sync or global cluster, on what scenario we should choose one above the another. Swift cluster will be used as a backend for web application deployed on multiple regions which is configured as active passive using DNS. Data usage can grow upto 100TB starting with 1TB. What will be better option to sync data between regions? Thank You Aravind M D -------------- next part -------------- An HTML attachment was scrubbed... URL: From vince.mlist at gmail.com Tue Feb 13 15:32:26 2018 From: vince.mlist at gmail.com (Vincent Godin) Date: Tue, 13 Feb 2018 16:32:26 +0100 Subject: [Openstack] is there a way to set the number of queues with the virtio-scsi driver ? Message-ID: When creating a image, in metadata "libvirt Driver Options", it's just possible set the "hw_scsi_model" to "virtio-scsi" but there is no way to set the number of queues. As this is a big factor of io improvement, why this option is still not available in openstack ? Does someone made a patch for this ? From tyler.bishop at beyondhosting.net Tue Feb 13 16:03:42 2018 From: tyler.bishop at beyondhosting.net (Tyler Bishop) Date: Tue, 13 Feb 2018 11:03:42 -0500 (EST) Subject: [Openstack] is there a way to set the number of queues with the virtio-scsi driver ? In-Reply-To: References: Message-ID: <1900883708.33659.1518537822621.JavaMail.zimbra@beyondhosting.net> Also interested in this. _____________________________________________ Tyler Bishop Founder EST 2007 O: 513-299-7108 x10 M: 513-646-5809 [ http://beyondhosting.net/ | http://BeyondHosting.net ] This email is intended only for the recipient(s) above and/or otherwise authorized personnel. The information contained herein and attached is confidential and the property of Beyond Hosting. Any unauthorized copying, forwarding, printing, and/or disclosing any information related to this email is prohibited. If you received this message in error, please contact the sender and destroy all copies of this email and any attachment(s). ----- Original Message ----- From: "Vincent Godin" To: "openstack" Sent: Tuesday, February 13, 2018 10:32:26 AM Subject: [Openstack] is there a way to set the number of queues with the virtio-scsi driver ? When creating a image, in metadata "libvirt Driver Options", it's just possible set the "hw_scsi_model" to "virtio-scsi" but there is no way to set the number of queues. As this is a big factor of io improvement, why this option is still not available in openstack ? Does someone made a patch for this ? _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From me at not.mn Tue Feb 13 16:51:51 2018 From: me at not.mn (John Dickinson) Date: Tue, 13 Feb 2018 08:51:51 -0800 Subject: [Openstack] Openstack Shift Write Minimum Node requirement In-Reply-To: References: Message-ID: <0D7B6A1D-619A-4F52-AB5F-7241863BC911@not.mn> This sounds like expected behavior with 3 storage nodes. 
In order for a write to successfully complete, it must be written to a quorum of storage devices. In the case of 3 replicas, the quorum is 2. By writing to two drives, Swift is providing some guarantees about the durability and availability of the data. A single copy is not durable, so a write to only a single drive will never return success to the client. This means that you need at least 2 drives available in order to write to a 3 replica policy. --John On 13 Feb 2018, at 3:21, aRaviNd wrote: > Hi All, > > We have configured swift with one proxy and three storage nodes. Our setup > contain two regions and three zones. > > > ​ > When two nodes goes down we are not able to upload any files but download > and read is working fine. Is there any requirement for minimum no of > storage nodes required for write to work? > > Aravind M D > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From chris.friesen at windriver.com Tue Feb 13 19:24:07 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 13 Feb 2018 13:24:07 -0600 Subject: [Openstack] is there a way to set the number of queues with the virtio-scsi driver ? In-Reply-To: References: Message-ID: <5A833B57.6060901@windriver.com> On 02/13/2018 09:32 AM, Vincent Godin wrote: > When creating a image, in metadata "libvirt Driver Options", it's just > possible set the "hw_scsi_model" to "virtio-scsi" but there is no way > to set the number of queues. As this is a big factor of io > improvement, why this option is still not available in openstack ? > Does someone made a patch for this ? As far as I know this is not currently supported. There was an old spec proposal to add more virtio-scsi tunables at https://review.openstack.org/#/c/103797/ but it seems it was abandoned due to objections. I didn't see anything newer. Chris From nahian.huberlin at gmail.com Wed Feb 14 09:40:37 2018 From: nahian.huberlin at gmail.com (Nahian Chowdhury) Date: Wed, 14 Feb 2018 10:40:37 +0100 Subject: [Openstack] Access the Overcloud Instance Message-ID: Dear all, I have installed Kolla as overcloud and I have two instances on that overcloud. There's a dead port on my undercloud which I used for assigning floating ip and used it in haproxy. Now I am looking for a solution that I can access/ssh the overcloud instances. If someone understands that and could help in this regard. -- Best regards, *Nahian Chowdhury* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ambadiaravind at gmail.com Wed Feb 14 14:55:45 2018 From: ambadiaravind at gmail.com (aRaviNd) Date: Wed, 14 Feb 2018 20:25:45 +0530 Subject: [Openstack] Openstack data replication In-Reply-To: References: Message-ID: Hi All, Whats the difference between container sync and global cluster? Which should we use for large data set of 100 Tb ? Aravind On Feb 13, 2018 7:52 PM, "aRaviNd" wrote: Hi All, We are working on implementing Openstack swift replication and would like to know whats the better approach, container sync or global cluster, on what scenario we should choose one above the another. 
Swift cluster will be used as a backend for web application deployed on multiple regions which is configured as active passive using DNS. Data usage can grow upto 100TB starting with 1TB. What will be better option to sync data between regions? Thank You Aravind M D -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at not.mn Wed Feb 14 15:40:37 2018 From: me at not.mn (John Dickinson) Date: Wed, 14 Feb 2018 07:40:37 -0800 Subject: [Openstack] Openstack data replication In-Reply-To: References: Message-ID: A global cluster is one logical cluster that durably stores data across all the available failure domains (the highest level of failure domain is "region"). For example, if you have 2 regions (ie DCs)and you're using 4 replicas, you'll end up with 2 replicas in each. Container sync is for taking a subset of data stored in one Swift cluster and synchronizing it with a different Swift cluster. Each Swift cluster is autonomous and handles it's own durability. So, eg if each Swift cluster uses 3 replicas, you'll end up with 6x total storage for the data that is synced. In most cases, people use global clusters and are happy with it. It's definitely been more used than container sync, and the sync process in global clusters is more efficient. However, deploying a multi-region Swift cluster comes with an extra set of challenges above and beyond a single-site deployment. You've got to consider more things with your inter-region networking, your network routing, the access patterns in each region, your requirements around locality, and the data placement of your data. All of these challenges are solvable, of course. Start with https://swift.openstack.org and also feel free to ask here on the mailing list or on freenode IRC in #openstack-swift. Good luck! John On 14 Feb 2018, at 6:55, aRaviNd wrote: > Hi All, > > Whats the difference between container sync and global cluster? Which > should we use for large data set of 100 Tb ? > > Aravind > > On Feb 13, 2018 7:52 PM, "aRaviNd" wrote: > > Hi All, > > We are working on implementing Openstack swift replication and would like > to know whats the better approach, container sync or global cluster, on > what scenario we should choose one above the another. > > Swift cluster will be used as a backend for web application deployed on > multiple regions which is configured as active passive using DNS. > > Data usage can grow upto 100TB starting with 1TB. What will be better > option to sync data between regions? > > Thank You > > Aravind M D > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From giorgis at acmac.uoc.gr Wed Feb 14 22:28:20 2018 From: giorgis at acmac.uoc.gr (Georgios Dimitrakakis) Date: Thu, 15 Feb 2018 00:28:20 +0200 Subject: [Openstack] qemu version for OpenStack Icehouse Message-ID: Dear all, I am trying to build a Windows image on a rather new Ubuntu system which image would be imported and used on an old OpenStack Icehouse installation. 
The system on which I am building it has the following characteristics: Distributor ID: Ubuntu Description: Ubuntu 16.04.3 LTS Release: 16.04 Codename: xenial Kernel: 4.4.0-112-generic and the installed QEMU packages are: ii qemu-block-extra:amd64 1:2.5+dfsg-5ubuntu10.20 amd64 extra block backend modules for qemu-system and qemu-utils ii qemu-kvm 1:2.5+dfsg-5ubuntu10.20 amd64 QEMU Full virtualization ii qemu-system-common 1:2.5+dfsg-5ubuntu10.20 amd64 QEMU full system emulation binaries (common files) ii qemu-system-x86 1:2.5+dfsg-5ubuntu10.20 amd64 QEMU full system emulation binaries (x86) ii qemu-utils 1:2.5+dfsg-5ubuntu10.20 amd64 QEMU utilities The error that I am getting whey I try to launch the VM is the following: 'ProcessExecutionError: Unexpected error while running command.\nCommand: env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ae8e48565dd0b934afe98a17febc0660077d7e35.part\nExit code: 1\nStdout: \'\'\nStderr: "\'image\' uses a qcow2 feature which is not supported by this qemu version: QCOW version 3\\nCould not open \'/var/lib/nova/instances/_base/ae8e48565dd0b934afe98a17febc0660077d7e35.part\': Operation not supported\\n"\n' Using the same image on a newer OpenStack Ocata installation works fine and the VM boots up and works without a problem. Obviously there is a version mismatch and something on this image is not supported by OpenStack Icehouse. Do you know where the problem could be? Is there a way to build it with backwards compatibility or can someone point out to me the latest versions that would work with OpenStack Icehouse? Unfortunately for the moment it's not possible to upgrade OpenStack. We will in the next few months but for the moment is important to have it working. Looking forward for your answers! Best regards, G. From berndbausch at gmail.com Thu Feb 15 01:21:23 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Thu, 15 Feb 2018 10:21:23 +0900 Subject: [Openstack] qemu version for OpenStack Icehouse In-Reply-To: References: Message-ID: <00a901d3a5fb$4c71b110$e5551330$@gmail.com> The problem is not the qemu version, but the image file version. More recently, qcow3 seems to be used; Icehouse probably uses qcow2. I think you have a number of options. The easiest approach might be converting the image to qcow2: https://ask.openstack.org/en/question/84506/how-to-convert-qcow3-image-to-qcow2/ Or convert it to raw. Main inconvenience is the increased size. Or create a volume from the image, and on the destination system, boot from the volume and snapshot the instance to create an image again. -----Original Message----- From: Georgios Dimitrakakis [mailto:giorgis at acmac.uoc.gr] Sent: Thursday, February 15, 2018 7:28 AM To: Openstack Subject: [Openstack] qemu version for OpenStack Icehouse Dear all, I am trying to build a Windows image on a rather new Ubuntu system which image would be imported and used on an old OpenStack Icehouse installation. 
The system on which I am building it has the following characteristics: Distributor ID: Ubuntu Description: Ubuntu 16.04.3 LTS Release: 16.04 Codename: xenial Kernel: 4.4.0-112-generic and the installed QEMU packages are: ii qemu-block-extra:amd64 1:2.5+dfsg-5ubuntu10.20 amd64 extra block backend modules for qemu-system and qemu-utils ii qemu-kvm 1:2.5+dfsg-5ubuntu10.20 amd64 QEMU Full virtualization ii qemu-system-common 1:2.5+dfsg-5ubuntu10.20 amd64 QEMU full system emulation binaries (common files) ii qemu-system-x86 1:2.5+dfsg-5ubuntu10.20 amd64 QEMU full system emulation binaries (x86) ii qemu-utils 1:2.5+dfsg-5ubuntu10.20 amd64 QEMU utilities The error that I am getting whey I try to launch the VM is the following: 'ProcessExecutionError: Unexpected error while running command.\nCommand: env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ae8e48565dd0b934afe98a17febc0660077d7e35.part\nExit code: 1\nStdout: \'\'\nStderr: "\'image\' uses a qcow2 feature which is not supported by this qemu version: QCOW version 3\\nCould not open \'/var/lib/nova/instances/_base/ae8e48565dd0b934afe98a17febc0660077d7e35.part\': Operation not supported\\n"\n' Using the same image on a newer OpenStack Ocata installation works fine and the VM boots up and works without a problem. Obviously there is a version mismatch and something on this image is not supported by OpenStack Icehouse. Do you know where the problem could be? Is there a way to build it with backwards compatibility or can someone point out to me the latest versions that would work with OpenStack Icehouse? Unfortunately for the moment it's not possible to upgrade OpenStack. We will in the next few months but for the moment is important to have it working. Looking forward for your answers! Best regards, G. _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From zorro at megatrone.ru Thu Feb 15 13:03:42 2018 From: zorro at megatrone.ru (=?UTF-8?B?0LfQvtGA0YDRi9GH?=) Date: Thu, 15 Feb 2018 16:03:42 +0300 Subject: [Openstack] disable object-auditor for swift Message-ID: hi I have a swift cluster installed (without using an openstack) I ran into the problem of a high high disk utilization I found out that the main load is given to the server /usr/bin/swift-object-auditor From the documentation, I learned that this process checks the files on disks (xfs can spoil files during a cold restart) My cluster is working on ext4. I can disable the swift-object-auditor, so that I have no problems with files in the future? From matt at oliver.net.au Thu Feb 15 23:01:24 2018 From: matt at oliver.net.au (Matthew Oliver) Date: Fri, 16 Feb 2018 10:01:24 +1100 Subject: [Openstack] disable object-auditor for swift In-Reply-To: References: Message-ID: Hi Zorro, The object auditor is protecting you from bit rot. And is the service that will find corrupted files and quarantine them. So is rather important to the health of your cluster. It isn't just there because of XFS. We usually recommend XFS as when we store object metadata we store it as extended attributes (xattrs) with the object. Ext4 only limits xattr space to block size (at least be default) XFS isn't limited in xattr space. Just a warning as you may have problems if a user attaches too much metadata to an object. 
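If you want to see what that looks like on disk, the metadata is readable with getfattr (purely illustrative -- the path below is made up, point it at any *.data file under your objects directory on a storage node):

    getfattr -d -e hex /srv/node/sda/objects/1234/abc/<hash>/1518500000.00000.data

Everything Swift records about an object lives in the user.swift.metadata xattr(s), which is why the per-file xattr limit on ext4 is worth keeping in mind.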
Back to the auditor, there are auditor options you could make use of if you wanted to "turn it down" a bit. Like changing the `interval` or tuning the `files_per_second`, `bytes_per_second` or `zero_byte_files_per_second`. If the main problem is it taking all your iops you can also use the `nice_priority`, `ionice_class` and `ionice_priority` options to tell the auditor to play nicer. See the sample config file for configuration options and some description. The commented out options and values are the defaults: https://github.com/openstack/swift/blob/stable/pike/etc/object-server.conf-sample#L388-L430 Regards, Matt On Fri, Feb 16, 2018 at 12:03 AM, зоррыч wrote: > hi > > I have a swift cluster installed (without using an openstack) > > I ran into the problem of a high high disk utilization > > > I found out that the main load is given to the server > /usr/bin/swift-object-auditor > From the documentation, I learned that this process checks the files on > disks (xfs can spoil files during a cold restart) > > My cluster is working on ext4. > > I can disable the swift-object-auditor, so that I have no problems with > files in the future? > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac > k > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac > k > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sashang at gmail.com Sat Feb 17 08:49:00 2018 From: sashang at gmail.com (Sashan Govender) Date: Sat, 17 Feb 2018 08:49:00 +0000 Subject: [Openstack] requirements on a user to create an image in glance Message-ID: Hi What requirements must an openstack user meet to be able to create images in glance. I can create an image as a user as shown below but when I try to list the images as the same user it is not shown. xsasgov at deimos:sles12.2 test-utils/cloud/cloud-tool master $ glance --os-user-domain-name Default --os-tenant-id 33b174b1f999445a9cc4090938f80704 --os-tenant-name lda --os-password rootroot --os-username xsasgov --os-auth-url http://192.168.122.186:5000/v3 image-create --name lda-image --disk-format qcow2 --container-format bare --file testimage.qcow2 +------------------+--------------------------------------+ | Property | Value | +------------------+--------------------------------------+ | checksum | 11a0609ef8c758fafc722529fbbbc487 | | container_format | bare | | created_at | 2018-02-17T02:46:33.000000 | | deleted | False | | deleted_at | None | | disk_format | qcow2 | | id | 4d3a32b1-2050-4986-9b93-306571ccaa1f | | is_public | False | | min_disk | 0 | | min_ram | 0 | | name | lda-image | | owner | None | | protected | False | | size | 534773760 | | status | active | | updated_at | 2018-02-17T02:46:37.000000 | | virtual_size | None | +------------------+--------------------------------------+ xsasgov at deimos:sles12.2 test-utils/cloud/cloud-tool master $ glance --os-user-domain-name Default --os-tenant-id 33b174b1f999445a9cc4090938f80704 --os-tenant-name lda --os-password rootroot --os-username xsasgov --os-project-name lda --os-auth-url http://192.168.122.186:5000/v3 image-list +----+------+-------------+------------------+------+--------+ | ID | Name | Disk Format | Container Format | Size | Status | +----+------+-------------+------------------+------+--------+ +----+------+-------------+------------------+------+--------+ However the image is there. 
If I login to the openstack server and use the openstack admin account to list the images then it is appears. So I'm wondering what I've missed when setting up the non-admin user? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From giorgis at acmac.uoc.gr Sat Feb 17 11:08:38 2018 From: giorgis at acmac.uoc.gr (Georgios Dimitrakakis) Date: Sat, 17 Feb 2018 13:08:38 +0200 Subject: [Openstack] qemu version for OpenStack Icehouse In-Reply-To: <00a901d3a5fb$4c71b110$e5551330$@gmail.com> References: <00a901d3a5fb$4c71b110$e5551330$@gmail.com> Message-ID: Thank you very much both Erik and Bernd. Indeed the problem was with the image file version. Although "qemu-img convert -O qcow2 " didn't work I have re-created using a new QCOW image and that could be used without a problem. Thanks you once again for pointing me to the correct direction. Best regards, G. > The problem is not the qemu version, but the image file version. More > recently, qcow3 seems to be used; Icehouse probably uses qcow2. > > I think you have a number of options. > > The easiest approach might be converting the image to qcow2: > > > https://ask.openstack.org/en/question/84506/how-to-convert-qcow3-image-to-qcow2/ > > Or convert it to raw. Main inconvenience is the increased size. > > Or create a volume from the image, and on the destination system, > boot from the volume and snapshot the instance to create an image > again. > > -----Original Message----- > From: Georgios Dimitrakakis [mailto:giorgis at acmac.uoc.gr] > Sent: Thursday, February 15, 2018 7:28 AM > To: Openstack > Subject: [Openstack] qemu version for OpenStack Icehouse > > Dear all, > > I am trying to build a Windows image on a rather new Ubuntu system > which image would be imported and used on an old OpenStack Icehouse > installation. > > The system on which I am building it has the following > characteristics: > > Distributor ID: Ubuntu > Description: Ubuntu 16.04.3 LTS > Release: 16.04 > Codename: xenial > Kernel: 4.4.0-112-generic > > and the installed QEMU packages are: > > ii qemu-block-extra:amd64 > 1:2.5+dfsg-5ubuntu10.20 > amd64 extra block backend modules for > qemu-system and qemu-utils > ii qemu-kvm > 1:2.5+dfsg-5ubuntu10.20 > amd64 QEMU Full virtualization > ii qemu-system-common > 1:2.5+dfsg-5ubuntu10.20 > amd64 QEMU full system emulation binaries > (common files) > ii qemu-system-x86 > 1:2.5+dfsg-5ubuntu10.20 > amd64 QEMU full system emulation binaries > (x86) > ii qemu-utils > 1:2.5+dfsg-5ubuntu10.20 > amd64 QEMU utilities > > > The error that I am getting whey I try to launch the VM is the > following: > > 'ProcessExecutionError: Unexpected error while running > command.\nCommand: env LC_ALL=C LANG=C qemu-img info > > /var/lib/nova/instances/_base/ae8e48565dd0b934afe98a17febc0660077d7e35.part\nExit > code: 1\nStdout: \'\'\nStderr: "\'image\' uses a qcow2 feature which > is not supported by this qemu version: QCOW version 3\\nCould not > open > > > \'/var/lib/nova/instances/_base/ae8e48565dd0b934afe98a17febc0660077d7e35.part\': > > Operation not supported\\n"\n' > > > Using the same image on a newer OpenStack Ocata installation works > fine and the VM boots up and works without a problem. > > Obviously there is a version mismatch and something on this image is > not supported by OpenStack Icehouse. > > Do you know where the problem could be? Is there a way to build it > with backwards compatibility or can someone point out to me the > latest versions that would work with OpenStack Icehouse? 
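(For anyone who hits the same conversion problem: on a recent qemu-img a plain convert writes the qcow2 v3 layout by default, so you have to pin the older compat level explicitly -- a sketch, file names are placeholders:

    qemu-img convert -O qcow2 -o compat=0.10 windows-new.qcow2 windows-icehouse.qcow2
    qemu-img info windows-icehouse.qcow2

The info output should then show "compat: 0.10" rather than "1.1"; the 0.10 layout is what the older qemu on Icehouse-era hosts can read.)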
>
> Unfortunately for the moment it's not possible to upgrade OpenStack.
> We will in the next few months but for the moment is important to
> have it working.
>
> Looking forward for your answers!
>
> Best regards,
>
> G.
>
>
> _______________________________________________
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

From remo at italy1.com Sun Feb 18 01:02:42 2018
From: remo at italy1.com (remo at italy1.com)
Date: Sat, 17 Feb 2018 17:02:42 -0800
Subject: [Openstack] requirements on a user to create an image in glance
In-Reply-To: References: Message-ID:

As a user you cannot make an image public only admin can.

You can ask admin to make it public
You can share it with your other projects

 dal mio iPhone X

> Il giorno 17 feb 2018, alle ore 00:49, Sashan Govender <sashang at gmail.com> ha scritto:
>
> Hi
>
> What requirements must an openstack user meet to be able to create images
> in glance. I can create an image as a user as shown below but when I try
> to list the images as the same user it is not shown.
>
> xsasgov at deimos:sles12.2 test-utils/cloud/cloud-tool master $ glance
> --os-user-domain-name Default --os-tenant-id 33b174b1f999445a9cc4090938f80704
> --os-tenant-name lda --os-password rootroot --os-username xsasgov
> --os-auth-url http://192.168.122.186:5000/v3 image-create --name lda-image
> --disk-format qcow2 --container-format bare --file testimage.qcow2
>
> +------------------+--------------------------------------+
> | Property         | Value                                |
> +------------------+--------------------------------------+
> | checksum         | 11a0609ef8c758fafc722529fbbbc487     |
> | container_format | bare                                 |
> | created_at       | 2018-02-17T02:46:33.000000           |
> | deleted          | False                                |
> | deleted_at       | None                                 |
> | disk_format      | qcow2                                |
> | id               | 4d3a32b1-2050-4986-9b93-306571ccaa1f |
> | is_public        | False                                |
> | min_disk         | 0                                    |
> | min_ram          | 0                                    |
> | name             | lda-image                            |
> | owner            | None                                 |
> | protected        | False                                |
> | size             | 534773760                            |
> | status           | active                               |
> | updated_at       | 2018-02-17T02:46:37.000000           |
> | virtual_size     | None                                 |
> +------------------+--------------------------------------+
> xsasgov at deimos:sles12.2 test-utils/cloud/cloud-tool master $ glance
> --os-user-domain-name Default --os-tenant-id 33b174b1f999445a9cc4090938f80704
> --os-tenant-name lda --os-password rootroot --os-username xsasgov
> --os-project-name lda --os-auth-url http://192.168.122.186:5000/v3 image-list
> +----+------+-------------+------------------+------+--------+
> | ID | Name | Disk Format | Container Format | Size | Status |
> +----+------+-------------+------------------+------+--------+
> +----+------+-------------+------------------+------+--------+
>
>
> However the image is there. If I login to the openstack server and use the
> openstack admin account to list the images then it is appears. So I'm
> wondering what I've missed when setting up the non-admin user?
>
> Thanks
>
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
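For reference, the workflow sketched in the reply above maps onto the
python-openstackclient CLI roughly as follows. The exact flags vary a little
between client releases, the image ID is the one from the output quoted in
this thread, and <other-project-id> is only a placeholder, not a value taken
from the thread:

  # list images as a regular user, scoped to the project that owns the image
  openstack --os-project-name lda image list

  # share a private image with another project (run as the image owner)
  openstack image add project 4d3a32b1-2050-4986-9b93-306571ccaa1f <other-project-id>

  # the receiving project then accepts the membership
  openstack image set --accept 4d3a32b1-2050-4986-9b93-306571ccaa1f

  # admin only: make the image public for every project
  openstack image set --public 4d3a32b1-2050-4986-9b93-306571ccaa1f

It is also worth noting that the image-create output quoted earlier shows
owner as None; a private image with no owning project will not show up in a
project-scoped image-list, which is consistent with the behaviour being
reported.
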
IGFjdGl2ZSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8DQo+IHwgdXBkYXRlZF9hdCAg ICAgICB8IDIwMTgtMDItMTdUMDI6NDY6MzcuMDAwMDAwICAgICAgICAgICB8DQo+IHwgdmlydHVh bF9zaXplICAgICB8IE5vbmUgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8DQo+ICst LS0tLS0tLS0tLS0tLS0tLS0rLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0r DQo+IHhzYXNnb3ZAZGVpbW9zOnNsZXMxMi4yIHRlc3QtdXRpbHMvY2xvdWQvY2xvdWQtdG9vbCBt YXN0ZXIgJCBnbGFuY2UgLS1vcy11c2VyLWRvbWFpbi1uYW1lIERlZmF1bHQgLS1vcy10ZW5hbnQt aWQgMzNiMTc0YjFmOTk5NDQ1YTljYzQwOTA5MzhmODA3MDQgLS1vcy10ZW5hbnQtbmFtZSBsZGEg LS1vcy1wYXNzd29yZCByb290cm9vdCAtLW9zLXVzZXJuYW1lIHhzYXNnb3YgLS1vcy1wcm9qZWN0 LW5hbWUgbGRhIC0tb3MtYXV0aC11cmwgaHR0cDovLzE5Mi4xNjguMTIyLjE4Njo1MDAwL3YzICBp bWFnZS1saXN0DQo+ICstLS0tKy0tLS0tLSstLS0tLS0tLS0tLS0tKy0tLS0tLS0tLS0tLS0tLS0t LSstLS0tLS0rLS0tLS0tLS0rDQo+IHwgSUQgfCBOYW1lIHwgRGlzayBGb3JtYXQgfCBDb250YWlu ZXIgRm9ybWF0IHwgU2l6ZSB8IFN0YXR1cyB8DQo+ICstLS0tKy0tLS0tLSstLS0tLS0tLS0tLS0t Ky0tLS0tLS0tLS0tLS0tLS0tLSstLS0tLS0rLS0tLS0tLS0rDQo+ICstLS0tKy0tLS0tLSstLS0t LS0tLS0tLS0tKy0tLS0tLS0tLS0tLS0tLS0tLSstLS0tLS0rLS0tLS0tLS0rDQo+IA0KPiANCj4g SG93ZXZlciB0aGUgaW1hZ2UgaXMgdGhlcmUuIElmIEkgbG9naW4gdG8gdGhlIG9wZW5zdGFjayBz ZXJ2ZXIgYW5kIHVzZSB0aGUgb3BlbnN0YWNrIGFkbWluIGFjY291bnQgdG8gbGlzdCB0aGUgaW1h Z2VzIHRoZW4gaXQgaXMgYXBwZWFycy4gU28gSSdtIHdvbmRlcmluZyB3aGF0IEkndmUgbWlzc2Vk IHdoZW4gc2V0dGluZyB1cCB0aGUgbm9uLWFkbWluIHVzZXI/DQo+IA0KPiBUaGFua3MNCj4gDQo+ IA0KPiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXw0KPiBN YWlsaW5nIGxpc3Q6IGh0dHA6Ly9saXN0cy5vcGVuc3RhY2sub3JnL2NnaS1iaW4vbWFpbG1hbi9s aXN0aW5mby9vcGVuc3RhY2sNCj4gUG9zdCB0byAgICAgOiBvcGVuc3RhY2tAbGlzdHMub3BlbnN0 YWNrLm9yZw0KPiBVbnN1YnNjcmliZSA6IGh0dHA6Ly9saXN0cy5vcGVuc3RhY2sub3JnL2NnaS1i aW4vbWFpbG1hbi9saXN0aW5mby9vcGVuc3RhY2sNCg== --=_3243a3fe619b2a062e425706f4903d53 Content-Transfer-Encoding: base64 Content-Type: text/html; charset=utf-8 PGh0bWw+PGhlYWQ+PG1ldGEgaHR0cC1lcXVpdj0iY29udGVudC10eXBlIiBjb250ZW50PSJ0ZXh0 L2h0bWw7IGNoYXJzZXQ9dXRmLTgiPjwvaGVhZD48Ym9keSBkaXI9ImF1dG8iPkFzIGEgdXNlciB5 b3UgY2Fubm90IG1ha2UgYW4gaW1hZ2UgcHVibGljIG9ubHkgYWRtaW4gY2FuLiZuYnNwOzxkaXY+ PGJyPjwvZGl2PjxkaXY+WW91IGNhbiBhc2sgYWRtaW4gdG8gbWFrZSBpdCBwdWJsaWM8L2Rpdj48 ZGl2PllvdSBjYW4gc2hhcmUgaXQgd2l0aCB5b3VyIG90aGVyIHByb2plY3RzJm5ic3A7PGJyPjxi cj48ZGl2IGlkPSJBcHBsZU1haWxTaWduYXR1cmUiPu+jvyBkYWwgbWlvIGlQaG9uZSBYJm5ic3A7 PC9kaXY+PGRpdj48YnI+SWwgZ2lvcm5vIDE3IGZlYiAyMDE4LCBhbGxlIG9yZSAwMDo0OSwgU2Fz aGFuIEdvdmVuZGVyICZsdDs8YSBocmVmPSJtYWlsdG86c2FzaGFuZ0BnbWFpbC5jb20iPnNhc2hh bmdAZ21haWwuY29tPC9hPiZndDsgaGEgc2NyaXR0bzo8YnI+PGJyPjwvZGl2PjxibG9ja3F1b3Rl IHR5cGU9ImNpdGUiPjxkaXY+PGRpdiBkaXI9Imx0ciI+PGRpdj5IaTwvZGl2PjxkaXY+PGJyPjwv ZGl2PjxkaXY+V2hhdCByZXF1aXJlbWVudHMgbXVzdCBhbiBvcGVuc3RhY2sgdXNlciBtZWV0IHRv IGJlIGFibGUgdG8gY3JlYXRlIGltYWdlcyBpbiBnbGFuY2UuIEkgY2FuIGNyZWF0ZSBhbiBpbWFn ZSBhcyBhIHVzZXIgYXMgc2hvd24gYmVsb3cgYnV0IHdoZW4gSSB0cnkgdG8gbGlzdCB0aGUgaW1h Z2VzIGFzIHRoZSBzYW1lIHVzZXIgaXQgaXMgbm90IHNob3duLjwvZGl2PjxkaXY+PGJyPjwvZGl2 PjxkaXY+eHNhc2dvdkBkZWltb3M6c2xlczEyLjIgdGVzdC11dGlscy9jbG91ZC9jbG91ZC10b29s IG1hc3RlciAkIGdsYW5jZSAtLW9zLXVzZXItZG9tYWluLW5hbWUgRGVmYXVsdCAtLW9zLXRlbmFu dC1pZCAzM2IxNzRiMWY5OTk0NDVhOWNjNDA5MDkzOGY4MDcwNCAtLW9zLXRlbmFudC1uYW1lIGxk YSAtLW9zLXBhc3N3b3JkIHJvb3Ryb290IC0tb3MtdXNlcm5hbWUgeHNhc2dvdiAtLW9zLWF1dGgt dXJsIDxhIGhyZWY9Imh0dHA6Ly8xOTIuMTY4LjEyMi4xODY6NTAwMC92MyI+aHR0cDovLzE5Mi4x NjguMTIyLjE4Njo1MDAwL3YzPC9hPiZuYnNwOyBpbWFnZS1jcmVhdGUgLS1uYW1lIGxkYS1pbWFn ZSAtLWRpc2stZm9ybWF0IHFjb3cyIC0tY29udGFpbmVyLWZvcm1hdCBiYXJlIC0tZmlsZSB0ZXN0 aW1hZ2UucWNvdzImbmJzcDs8L2Rpdj48ZGl2Pjxicj48L2Rpdj48ZGl2PiZuYnNwOystLS0tLS0t 
LS0tLS0tLS0tLS0rLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0rPC9kaXY+ PGRpdj58IFByb3BlcnR5Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwO3wgVmFsdWUm bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgfDwv ZGl2PjxkaXY+Ky0tLS0tLS0tLS0tLS0tLS0tLSstLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLSs8L2Rpdj48ZGl2PnwgY2hlY2tzdW0mbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz cDsgJm5ic3A7fCAxMWEwNjA5ZWY4Yzc1OGZhZmM3MjI1MjlmYmJiYzQ4NyZuYnNwOyAmbmJzcDsg Jm5ic3A7fDwvZGl2PjxkaXY+fCBjb250YWluZXJfZm9ybWF0IHwgYmFyZSZuYnNwOyAmbmJzcDsg Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDt8PC9kaXY+PGRp dj58IGNyZWF0ZWRfYXQmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDt8IDIwMTgtMDItMTdUMDI6 NDY6MzMuMDAwMDAwJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDt8PC9k aXY+PGRpdj58IGRlbGV0ZWQmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7IHwgRmFs c2UmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7 ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg fDwvZGl2PjxkaXY+fCBkZWxldGVkX2F0Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7fCBOb25l Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu YnNwO3w8L2Rpdj48ZGl2PnwgZGlza19mb3JtYXQmbmJzcDsgJm5ic3A7ICZuYnNwOyB8IHFjb3cy Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7IHw8 L2Rpdj48ZGl2PnwgaWQmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm bmJzcDsgJm5ic3A7fCA0ZDNhMzJiMS0yMDUwLTQ5ODYtOWI5My0zMDY1NzFjY2FhMWYgfDwvZGl2 PjxkaXY+fCBpc19wdWJsaWMmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgfCBGYWxzZSZuYnNw OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7 ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyB8PC9kaXY+ PGRpdj58IG1pbl9kaXNrJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwO3wgMCZuYnNw OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7 ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg Jm5ic3A7IHw8L2Rpdj48ZGl2PnwgbWluX3JhbSZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm bmJzcDsgfCAwJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7 ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg Jm5ic3A7ICZuYnNwOyAmbmJzcDsgfDwvZGl2PjxkaXY+fCBuYW1lJm5ic3A7ICZuYnNwOyAmbmJz cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7fCBsZGEtaW1hZ2UmbmJzcDsgJm5ic3A7ICZu YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7IHw8L2Rpdj48ZGl2Pnwgb3duZXImbmJzcDsgJm5ic3A7 ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyB8IE5vbmUmbmJzcDsgJm5ic3A7ICZuYnNwOyAm bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7fDwvZGl2PjxkaXY+fCBwcm90 ZWN0ZWQmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgfCBGYWxzZSZuYnNwOyAmbmJzcDsgJm5i c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyB8PC9kaXY+PGRpdj58IHNpemUm bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDt8IDUzNDc3Mzc2 MCZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgfDwvZGl2PjxkaXY+fCBz dGF0dXMmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwO3wgYWN0aXZlJm5i 
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7fDwvZGl2 PjxkaXY+fCB1cGRhdGVkX2F0Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7fCAyMDE4LTAyLTE3 VDAyOjQ2OjM3LjAwMDAwMCZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7 fDwvZGl2PjxkaXY+fCB2aXJ0dWFsX3NpemUmbmJzcDsgJm5ic3A7ICZuYnNwO3wgTm9uZSZuYnNw OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7 ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDt8 PC9kaXY+PGRpdj4rLS0tLS0tLS0tLS0tLS0tLS0tKy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tKzwvZGl2PjxkaXY+eHNhc2dvdkBkZWltb3M6c2xlczEyLjIgdGVzdC11dGls cy9jbG91ZC9jbG91ZC10b29sIG1hc3RlciAkIGdsYW5jZSAtLW9zLXVzZXItZG9tYWluLW5hbWUg RGVmYXVsdCAtLW9zLXRlbmFudC1pZCAzM2IxNzRiMWY5OTk0NDVhOWNjNDA5MDkzOGY4MDcwNCAt LW9zLXRlbmFudC1uYW1lIGxkYSAtLW9zLXBhc3N3b3JkIHJvb3Ryb290IC0tb3MtdXNlcm5hbWUg eHNhc2dvdiAtLW9zLXByb2plY3QtbmFtZSBsZGEgLS1vcy1hdXRoLXVybCA8YSBocmVmPSJodHRw Oi8vMTkyLjE2OC4xMjIuMTg2OjUwMDAvdjMiPmh0dHA6Ly8xOTIuMTY4LjEyMi4xODY6NTAwMC92 MzwvYT4mbmJzcDsgaW1hZ2UtbGlzdDwvZGl2PjxkaXY+Ky0tLS0rLS0tLS0tKy0tLS0tLS0tLS0t LS0rLS0tLS0tLS0tLS0tLS0tLS0tKy0tLS0tLSstLS0tLS0tLSs8L2Rpdj48ZGl2PnwgSUQgfCBO YW1lIHwgRGlzayBGb3JtYXQgfCBDb250YWluZXIgRm9ybWF0IHwgU2l6ZSB8IFN0YXR1cyB8PC9k aXY+PGRpdj4rLS0tLSstLS0tLS0rLS0tLS0tLS0tLS0tLSstLS0tLS0tLS0tLS0tLS0tLS0rLS0t LS0tKy0tLS0tLS0tKzwvZGl2PjxkaXY+Ky0tLS0rLS0tLS0tKy0tLS0tLS0tLS0tLS0rLS0tLS0t LS0tLS0tLS0tLS0tKy0tLS0tLSstLS0tLS0tLSs8L2Rpdj48ZGl2Pjxicj48L2Rpdj48ZGl2Pjxi cj48L2Rpdj48ZGl2Pkhvd2V2ZXIgdGhlIGltYWdlIGlzIHRoZXJlLiBJZiBJIGxvZ2luIHRvIHRo ZSBvcGVuc3RhY2sgc2VydmVyIGFuZCB1c2UgdGhlIG9wZW5zdGFjayBhZG1pbiBhY2NvdW50IHRv IGxpc3QgdGhlIGltYWdlcyB0aGVuIGl0IGlzIGFwcGVhcnMuIFNvIEknbSB3b25kZXJpbmcgd2hh dCBJJ3ZlIG1pc3NlZCB3aGVuIHNldHRpbmcgdXAgdGhlIG5vbi1hZG1pbiB1c2VyPzwvZGl2Pjxk aXY+PGJyPjwvZGl2PjxkaXY+VGhhbmtzPC9kaXY+PGRpdj48YnI+PC9kaXY+PGRpdj48YnI+PC9k aXY+PC9kaXY+DQo8L2Rpdj48L2Jsb2NrcXVvdGU+PGJsb2NrcXVvdGUgdHlwZT0iY2l0ZSI+PGRp dj48c3Bhbj5fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXzwv c3Bhbj48YnI+PHNwYW4+TWFpbGluZyBsaXN0OiA8YSBocmVmPSJodHRwOi8vbGlzdHMub3BlbnN0 YWNrLm9yZy9jZ2ktYmluL21haWxtYW4vbGlzdGluZm8vb3BlbnN0YWNrIj5odHRwOi8vbGlzdHMu b3BlbnN0YWNrLm9yZy9jZ2ktYmluL21haWxtYW4vbGlzdGluZm8vb3BlbnN0YWNrPC9hPjwvc3Bh bj48YnI+PHNwYW4+UG9zdCB0byAmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDs6IDxhIGhyZWY9Im1h aWx0bzpvcGVuc3RhY2tAbGlzdHMub3BlbnN0YWNrLm9yZyI+b3BlbnN0YWNrQGxpc3RzLm9wZW5z dGFjay5vcmc8L2E+PC9zcGFuPjxicj48c3Bhbj5VbnN1YnNjcmliZSA6IDxhIGhyZWY9Imh0dHA6 Ly9saXN0cy5vcGVuc3RhY2sub3JnL2NnaS1iaW4vbWFpbG1hbi9saXN0aW5mby9vcGVuc3RhY2si Pmh0dHA6Ly9saXN0cy5vcGVuc3RhY2sub3JnL2NnaS1iaW4vbWFpbG1hbi9saXN0aW5mby9vcGVu c3RhY2s8L2E+PC9zcGFuPjxicj48L2Rpdj48L2Jsb2NrcXVvdGU+PC9kaXY+PC9ib2R5PjwvaHRt bD4= --=_3243a3fe619b2a062e425706f4903d53-- From remo at italy1.com Sun Feb 18 01:02:42 2018 From: remo at italy1.com (remo at italy1.com) Date: Sat, 17 Feb 2018 17:02:42 -0800 Subject: [Openstack] requirements on a user to create an image in glance In-Reply-To: References: Message-ID: Content-Type: multipart/alternative; boundary="=_3243a3fe619b2a062e425706f4903d53" --=_3243a3fe619b2a062e425706f4903d53 Content-Transfer-Encoding: base64 Content-Type: text/plain; charset=utf-8 QXMgYSB1c2VyIHlvdSBjYW5ub3QgbWFrZSBhbiBpbWFnZSBwdWJsaWMgb25seSBhZG1pbiBjYW4u IA0KDQpZb3UgY2FuIGFzayBhZG1pbiB0byBtYWtlIGl0IHB1YmxpYw0KWW91IGNhbiBzaGFyZSBp dCB3aXRoIHlvdXIgb3RoZXIgcHJvamVjdHMgDQoNCu+jvyBkYWwgbWlvIGlQaG9uZSBYIA0KDQo+ 
IElsIGdpb3JubyAxNyBmZWIgMjAxOCwgYWxsZSBvcmUgMDA6NDksIFNhc2hhbiBHb3ZlbmRlciA8 c2FzaGFuZ0BnbWFpbC5jb20+IGhhIHNjcml0dG86DQo+IA0KPiBIaQ0KPiANCj4gV2hhdCByZXF1 aXJlbWVudHMgbXVzdCBhbiBvcGVuc3RhY2sgdXNlciBtZWV0IHRvIGJlIGFibGUgdG8gY3JlYXRl IGltYWdlcyBpbiBnbGFuY2UuIEkgY2FuIGNyZWF0ZSBhbiBpbWFnZSBhcyBhIHVzZXIgYXMgc2hv d24gYmVsb3cgYnV0IHdoZW4gSSB0cnkgdG8gbGlzdCB0aGUgaW1hZ2VzIGFzIHRoZSBzYW1lIHVz ZXIgaXQgaXMgbm90IHNob3duLg0KPiANCj4geHNhc2dvdkBkZWltb3M6c2xlczEyLjIgdGVzdC11 dGlscy9jbG91ZC9jbG91ZC10b29sIG1hc3RlciAkIGdsYW5jZSAtLW9zLXVzZXItZG9tYWluLW5h bWUgRGVmYXVsdCAtLW9zLXRlbmFudC1pZCAzM2IxNzRiMWY5OTk0NDVhOWNjNDA5MDkzOGY4MDcw NCAtLW9zLXRlbmFudC1uYW1lIGxkYSAtLW9zLXBhc3N3b3JkIHJvb3Ryb290IC0tb3MtdXNlcm5h bWUgeHNhc2dvdiAtLW9zLWF1dGgtdXJsIGh0dHA6Ly8xOTIuMTY4LjEyMi4xODY6NTAwMC92MyAg aW1hZ2UtY3JlYXRlIC0tbmFtZSBsZGEtaW1hZ2UgLS1kaXNrLWZvcm1hdCBxY293MiAtLWNvbnRh aW5lci1mb3JtYXQgYmFyZSAtLWZpbGUgdGVzdGltYWdlLnFjb3cyIA0KPiANCj4gICstLS0tLS0t LS0tLS0tLS0tLS0rLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0rDQo+IHwg UHJvcGVydHkgICAgICAgICB8IFZhbHVlICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8 DQo+ICstLS0tLS0tLS0tLS0tLS0tLS0rLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0rDQo+IHwgY2hlY2tzdW0gICAgICAgICB8IDExYTA2MDllZjhjNzU4ZmFmYzcyMjUyOWZi YmJjNDg3ICAgICB8DQo+IHwgY29udGFpbmVyX2Zvcm1hdCB8IGJhcmUgICAgICAgICAgICAgICAg ICAgICAgICAgICAgICAgICB8DQo+IHwgY3JlYXRlZF9hdCAgICAgICB8IDIwMTgtMDItMTdUMDI6 NDY6MzMuMDAwMDAwICAgICAgICAgICB8DQo+IHwgZGVsZXRlZCAgICAgICAgICB8IEZhbHNlICAg ICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8DQo+IHwgZGVsZXRlZF9hdCAgICAgICB8IE5v bmUgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8DQo+IHwgZGlza19mb3JtYXQgICAg ICB8IHFjb3cyICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8DQo+IHwgaWQgICAgICAg ICAgICAgICB8IDRkM2EzMmIxLTIwNTAtNDk4Ni05YjkzLTMwNjU3MWNjYWExZiB8DQo+IHwgaXNf cHVibGljICAgICAgICB8IEZhbHNlICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8DQo+ IHwgbWluX2Rpc2sgICAgICAgICB8IDAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg ICB8DQo+IHwgbWluX3JhbSAgICAgICAgICB8IDAgICAgICAgICAgICAgICAgICAgICAgICAgICAg ICAgICAgICB8DQo+IHwgbmFtZSAgICAgICAgICAgICB8IGxkYS1pbWFnZSAgICAgICAgICAgICAg ICAgICAgICAgICAgICB8DQo+IHwgb3duZXIgICAgICAgICAgICB8IE5vbmUgICAgICAgICAgICAg ICAgICAgICAgICAgICAgICAgICB8DQo+IHwgcHJvdGVjdGVkICAgICAgICB8IEZhbHNlICAgICAg ICAgICAgICAgICAgICAgICAgICAgICAgICB8DQo+IHwgc2l6ZSAgICAgICAgICAgICB8IDUzNDc3 Mzc2MCAgICAgICAgICAgICAgICAgICAgICAgICAgICB8DQo+IHwgc3RhdHVzICAgICAgICAgICB8 IGFjdGl2ZSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8DQo+IHwgdXBkYXRlZF9hdCAg ICAgICB8IDIwMTgtMDItMTdUMDI6NDY6MzcuMDAwMDAwICAgICAgICAgICB8DQo+IHwgdmlydHVh bF9zaXplICAgICB8IE5vbmUgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8DQo+ICst LS0tLS0tLS0tLS0tLS0tLS0rLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0r DQo+IHhzYXNnb3ZAZGVpbW9zOnNsZXMxMi4yIHRlc3QtdXRpbHMvY2xvdWQvY2xvdWQtdG9vbCBt YXN0ZXIgJCBnbGFuY2UgLS1vcy11c2VyLWRvbWFpbi1uYW1lIERlZmF1bHQgLS1vcy10ZW5hbnQt aWQgMzNiMTc0YjFmOTk5NDQ1YTljYzQwOTA5MzhmODA3MDQgLS1vcy10ZW5hbnQtbmFtZSBsZGEg LS1vcy1wYXNzd29yZCByb290cm9vdCAtLW9zLXVzZXJuYW1lIHhzYXNnb3YgLS1vcy1wcm9qZWN0 LW5hbWUgbGRhIC0tb3MtYXV0aC11cmwgaHR0cDovLzE5Mi4xNjguMTIyLjE4Njo1MDAwL3YzICBp bWFnZS1saXN0DQo+ICstLS0tKy0tLS0tLSstLS0tLS0tLS0tLS0tKy0tLS0tLS0tLS0tLS0tLS0t LSstLS0tLS0rLS0tLS0tLS0rDQo+IHwgSUQgfCBOYW1lIHwgRGlzayBGb3JtYXQgfCBDb250YWlu ZXIgRm9ybWF0IHwgU2l6ZSB8IFN0YXR1cyB8DQo+ICstLS0tKy0tLS0tLSstLS0tLS0tLS0tLS0t Ky0tLS0tLS0tLS0tLS0tLS0tLSstLS0tLS0rLS0tLS0tLS0rDQo+ICstLS0tKy0tLS0tLSstLS0t LS0tLS0tLS0tKy0tLS0tLS0tLS0tLS0tLS0tLSstLS0tLS0rLS0tLS0tLS0rDQo+IA0KPiANCj4g SG93ZXZlciB0aGUgaW1hZ2UgaXMgdGhlcmUuIElmIEkgbG9naW4gdG8gdGhlIG9wZW5zdGFjayBz 
ZXJ2ZXIgYW5kIHVzZSB0aGUgb3BlbnN0YWNrIGFkbWluIGFjY291bnQgdG8gbGlzdCB0aGUgaW1h Z2VzIHRoZW4gaXQgaXMgYXBwZWFycy4gU28gSSdtIHdvbmRlcmluZyB3aGF0IEkndmUgbWlzc2Vk IHdoZW4gc2V0dGluZyB1cCB0aGUgbm9uLWFkbWluIHVzZXI/DQo+IA0KPiBUaGFua3MNCj4gDQo+ IA0KPiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXw0KPiBN YWlsaW5nIGxpc3Q6IGh0dHA6Ly9saXN0cy5vcGVuc3RhY2sub3JnL2NnaS1iaW4vbWFpbG1hbi9s aXN0aW5mby9vcGVuc3RhY2sNCj4gUG9zdCB0byAgICAgOiBvcGVuc3RhY2tAbGlzdHMub3BlbnN0 YWNrLm9yZw0KPiBVbnN1YnNjcmliZSA6IGh0dHA6Ly9saXN0cy5vcGVuc3RhY2sub3JnL2NnaS1i aW4vbWFpbG1hbi9saXN0aW5mby9vcGVuc3RhY2sNCg== --=_3243a3fe619b2a062e425706f4903d53 Content-Transfer-Encoding: base64 Content-Type: text/html; charset=utf-8 PGh0bWw+PGhlYWQ+PG1ldGEgaHR0cC1lcXVpdj0iY29udGVudC10eXBlIiBjb250ZW50PSJ0ZXh0 L2h0bWw7IGNoYXJzZXQ9dXRmLTgiPjwvaGVhZD48Ym9keSBkaXI9ImF1dG8iPkFzIGEgdXNlciB5 b3UgY2Fubm90IG1ha2UgYW4gaW1hZ2UgcHVibGljIG9ubHkgYWRtaW4gY2FuLiZuYnNwOzxkaXY+ PGJyPjwvZGl2PjxkaXY+WW91IGNhbiBhc2sgYWRtaW4gdG8gbWFrZSBpdCBwdWJsaWM8L2Rpdj48 ZGl2PllvdSBjYW4gc2hhcmUgaXQgd2l0aCB5b3VyIG90aGVyIHByb2plY3RzJm5ic3A7PGJyPjxi cj48ZGl2IGlkPSJBcHBsZU1haWxTaWduYXR1cmUiPu+jvyBkYWwgbWlvIGlQaG9uZSBYJm5ic3A7 PC9kaXY+PGRpdj48YnI+SWwgZ2lvcm5vIDE3IGZlYiAyMDE4LCBhbGxlIG9yZSAwMDo0OSwgU2Fz aGFuIEdvdmVuZGVyICZsdDs8YSBocmVmPSJtYWlsdG86c2FzaGFuZ0BnbWFpbC5jb20iPnNhc2hh bmdAZ21haWwuY29tPC9hPiZndDsgaGEgc2NyaXR0bzo8YnI+PGJyPjwvZGl2PjxibG9ja3F1b3Rl IHR5cGU9ImNpdGUiPjxkaXY+PGRpdiBkaXI9Imx0ciI+PGRpdj5IaTwvZGl2PjxkaXY+PGJyPjwv ZGl2PjxkaXY+V2hhdCByZXF1aXJlbWVudHMgbXVzdCBhbiBvcGVuc3RhY2sgdXNlciBtZWV0IHRv IGJlIGFibGUgdG8gY3JlYXRlIGltYWdlcyBpbiBnbGFuY2UuIEkgY2FuIGNyZWF0ZSBhbiBpbWFn ZSBhcyBhIHVzZXIgYXMgc2hvd24gYmVsb3cgYnV0IHdoZW4gSSB0cnkgdG8gbGlzdCB0aGUgaW1h Z2VzIGFzIHRoZSBzYW1lIHVzZXIgaXQgaXMgbm90IHNob3duLjwvZGl2PjxkaXY+PGJyPjwvZGl2 PjxkaXY+eHNhc2dvdkBkZWltb3M6c2xlczEyLjIgdGVzdC11dGlscy9jbG91ZC9jbG91ZC10b29s IG1hc3RlciAkIGdsYW5jZSAtLW9zLXVzZXItZG9tYWluLW5hbWUgRGVmYXVsdCAtLW9zLXRlbmFu dC1pZCAzM2IxNzRiMWY5OTk0NDVhOWNjNDA5MDkzOGY4MDcwNCAtLW9zLXRlbmFudC1uYW1lIGxk YSAtLW9zLXBhc3N3b3JkIHJvb3Ryb290IC0tb3MtdXNlcm5hbWUgeHNhc2dvdiAtLW9zLWF1dGgt dXJsIDxhIGhyZWY9Imh0dHA6Ly8xOTIuMTY4LjEyMi4xODY6NTAwMC92MyI+aHR0cDovLzE5Mi4x NjguMTIyLjE4Njo1MDAwL3YzPC9hPiZuYnNwOyBpbWFnZS1jcmVhdGUgLS1uYW1lIGxkYS1pbWFn ZSAtLWRpc2stZm9ybWF0IHFjb3cyIC0tY29udGFpbmVyLWZvcm1hdCBiYXJlIC0tZmlsZSB0ZXN0 aW1hZ2UucWNvdzImbmJzcDs8L2Rpdj48ZGl2Pjxicj48L2Rpdj48ZGl2PiZuYnNwOystLS0tLS0t LS0tLS0tLS0tLS0rLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0rPC9kaXY+ PGRpdj58IFByb3BlcnR5Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwO3wgVmFsdWUm bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgfDwv ZGl2PjxkaXY+Ky0tLS0tLS0tLS0tLS0tLS0tLSstLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLSs8L2Rpdj48ZGl2PnwgY2hlY2tzdW0mbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz cDsgJm5ic3A7fCAxMWEwNjA5ZWY4Yzc1OGZhZmM3MjI1MjlmYmJiYzQ4NyZuYnNwOyAmbmJzcDsg Jm5ic3A7fDwvZGl2PjxkaXY+fCBjb250YWluZXJfZm9ybWF0IHwgYmFyZSZuYnNwOyAmbmJzcDsg Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDt8PC9kaXY+PGRp dj58IGNyZWF0ZWRfYXQmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDt8IDIwMTgtMDItMTdUMDI6 NDY6MzMuMDAwMDAwJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDt8PC9k aXY+PGRpdj58IGRlbGV0ZWQmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7IHwgRmFs c2UmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7 ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg 
fDwvZGl2PjxkaXY+fCBkZWxldGVkX2F0Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7fCBOb25l Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu YnNwO3w8L2Rpdj48ZGl2PnwgZGlza19mb3JtYXQmbmJzcDsgJm5ic3A7ICZuYnNwOyB8IHFjb3cy Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7IHw8 L2Rpdj48ZGl2PnwgaWQmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm bmJzcDsgJm5ic3A7fCA0ZDNhMzJiMS0yMDUwLTQ5ODYtOWI5My0zMDY1NzFjY2FhMWYgfDwvZGl2 PjxkaXY+fCBpc19wdWJsaWMmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgfCBGYWxzZSZuYnNw OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7 ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyB8PC9kaXY+ PGRpdj58IG1pbl9kaXNrJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwO3wgMCZuYnNw OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7 ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg Jm5ic3A7IHw8L2Rpdj48ZGl2PnwgbWluX3JhbSZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm bmJzcDsgfCAwJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7 ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg Jm5ic3A7ICZuYnNwOyAmbmJzcDsgfDwvZGl2PjxkaXY+fCBuYW1lJm5ic3A7ICZuYnNwOyAmbmJz cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7fCBsZGEtaW1hZ2UmbmJzcDsgJm5ic3A7ICZu YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7IHw8L2Rpdj48ZGl2Pnwgb3duZXImbmJzcDsgJm5ic3A7 ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyB8IE5vbmUmbmJzcDsgJm5ic3A7ICZuYnNwOyAm bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7fDwvZGl2PjxkaXY+fCBwcm90 ZWN0ZWQmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgfCBGYWxzZSZuYnNwOyAmbmJzcDsgJm5i c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyB8PC9kaXY+PGRpdj58IHNpemUm bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDt8IDUzNDc3Mzc2 MCZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgfDwvZGl2PjxkaXY+fCBz dGF0dXMmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwO3wgYWN0aXZlJm5i c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7fDwvZGl2 PjxkaXY+fCB1cGRhdGVkX2F0Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7fCAyMDE4LTAyLTE3 VDAyOjQ2OjM3LjAwMDAwMCZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7 fDwvZGl2PjxkaXY+fCB2aXJ0dWFsX3NpemUmbmJzcDsgJm5ic3A7ICZuYnNwO3wgTm9uZSZuYnNw OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7 ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDt8 PC9kaXY+PGRpdj4rLS0tLS0tLS0tLS0tLS0tLS0tKy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tKzwvZGl2PjxkaXY+eHNhc2dvdkBkZWltb3M6c2xlczEyLjIgdGVzdC11dGls cy9jbG91ZC9jbG91ZC10b29sIG1hc3RlciAkIGdsYW5jZSAtLW9zLXVzZXItZG9tYWluLW5hbWUg RGVmYXVsdCAtLW9zLXRlbmFudC1pZCAzM2IxNzRiMWY5OTk0NDVhOWNjNDA5MDkzOGY4MDcwNCAt LW9zLXRlbmFudC1uYW1lIGxkYSAtLW9zLXBhc3N3b3JkIHJvb3Ryb290IC0tb3MtdXNlcm5hbWUg eHNhc2dvdiAtLW9zLXByb2plY3QtbmFtZSBsZGEgLS1vcy1hdXRoLXVybCA8YSBocmVmPSJodHRw Oi8vMTkyLjE2OC4xMjIuMTg2OjUwMDAvdjMiPmh0dHA6Ly8xOTIuMTY4LjEyMi4xODY6NTAwMC92 MzwvYT4mbmJzcDsgaW1hZ2UtbGlzdDwvZGl2PjxkaXY+Ky0tLS0rLS0tLS0tKy0tLS0tLS0tLS0t 
LS0rLS0tLS0tLS0tLS0tLS0tLS0tKy0tLS0tLSstLS0tLS0tLSs8L2Rpdj48ZGl2PnwgSUQgfCBO YW1lIHwgRGlzayBGb3JtYXQgfCBDb250YWluZXIgRm9ybWF0IHwgU2l6ZSB8IFN0YXR1cyB8PC9k aXY+PGRpdj4rLS0tLSstLS0tLS0rLS0tLS0tLS0tLS0tLSstLS0tLS0tLS0tLS0tLS0tLS0rLS0t LS0tKy0tLS0tLS0tKzwvZGl2PjxkaXY+Ky0tLS0rLS0tLS0tKy0tLS0tLS0tLS0tLS0rLS0tLS0t LS0tLS0tLS0tLS0tKy0tLS0tLSstLS0tLS0tLSs8L2Rpdj48ZGl2Pjxicj48L2Rpdj48ZGl2Pjxi cj48L2Rpdj48ZGl2Pkhvd2V2ZXIgdGhlIGltYWdlIGlzIHRoZXJlLiBJZiBJIGxvZ2luIHRvIHRo ZSBvcGVuc3RhY2sgc2VydmVyIGFuZCB1c2UgdGhlIG9wZW5zdGFjayBhZG1pbiBhY2NvdW50IHRv IGxpc3QgdGhlIGltYWdlcyB0aGVuIGl0IGlzIGFwcGVhcnMuIFNvIEknbSB3b25kZXJpbmcgd2hh dCBJJ3ZlIG1pc3NlZCB3aGVuIHNldHRpbmcgdXAgdGhlIG5vbi1hZG1pbiB1c2VyPzwvZGl2Pjxk aXY+PGJyPjwvZGl2PjxkaXY+VGhhbmtzPC9kaXY+PGRpdj48YnI+PC9kaXY+PGRpdj48YnI+PC9k aXY+PC9kaXY+DQo8L2Rpdj48L2Jsb2NrcXVvdGU+PGJsb2NrcXVvdGUgdHlwZT0iY2l0ZSI+PGRp dj48c3Bhbj5fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXzwv c3Bhbj48YnI+PHNwYW4+TWFpbGluZyBsaXN0OiA8YSBocmVmPSJodHRwOi8vbGlzdHMub3BlbnN0 YWNrLm9yZy9jZ2ktYmluL21haWxtYW4vbGlzdGluZm8vb3BlbnN0YWNrIj5odHRwOi8vbGlzdHMu b3BlbnN0YWNrLm9yZy9jZ2ktYmluL21haWxtYW4vbGlzdGluZm8vb3BlbnN0YWNrPC9hPjwvc3Bh bj48YnI+PHNwYW4+UG9zdCB0byAmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDs6IDxhIGhyZWY9Im1h aWx0bzpvcGVuc3RhY2tAbGlzdHMub3BlbnN0YWNrLm9yZyI+b3BlbnN0YWNrQGxpc3RzLm9wZW5z dGFjay5vcmc8L2E+PC9zcGFuPjxicj48c3Bhbj5VbnN1YnNjcmliZSA6IDxhIGhyZWY9Imh0dHA6 Ly9saXN0cy5vcGVuc3RhY2sub3JnL2NnaS1iaW4vbWFpbG1hbi9saXN0aW5mby9vcGVuc3RhY2si Pmh0dHA6Ly9saXN0cy5vcGVuc3RhY2sub3JnL2NnaS1iaW4vbWFpbG1hbi9saXN0aW5mby9vcGVu c3RhY2s8L2E+PC9zcGFuPjxicj48L2Rpdj48L2Jsb2NrcXVvdGU+PC9kaXY+PC9ib2R5PjwvaHRt bD4= --=_3243a3fe619b2a062e425706f4903d53-- From sashang at gmail.com Sun Feb 18 22:54:09 2018 From: sashang at gmail.com (Sashan Govender) Date: Sun, 18 Feb 2018 22:54:09 +0000 Subject: [Openstack] requirements on a user to create an image in glance In-Reply-To: References: Message-ID: If a user A who is part of project P creates an image then can user B who is part of project P as well see it and use it? On Sun, Feb 18, 2018 at 12:02 PM wrote: > As a user you cannot make an image public only admin can. > > You can ask admin to make it public > You can share it with your other projects > >  dal mio iPhone X > > Il giorno 17 feb 2018, alle ore 00:49, Sashan Govender > ha scritto: > > Hi > > What requirements must an openstack user meet to be able to create images > in glance. I can create an image as a user as shown below but when I try to > list the images as the same user it is not shown. 
> > xsasgov at deimos:sles12.2 test-utils/cloud/cloud-tool master $ glance > --os-user-domain-name Default --os-tenant-id > 33b174b1f999445a9cc4090938f80704 --os-tenant-name lda --os-password > rootroot --os-username xsasgov --os-auth-url > http://192.168.122.186:5000/v3 image-create --name lda-image > --disk-format qcow2 --container-format bare --file testimage.qcow2 > > +------------------+--------------------------------------+ > | Property | Value | > +------------------+--------------------------------------+ > | checksum | 11a0609ef8c758fafc722529fbbbc487 | > | container_format | bare | > | created_at | 2018-02-17T02:46:33.000000 | > | deleted | False | > | deleted_at | None | > | disk_format | qcow2 | > | id | 4d3a32b1-2050-4986-9b93-306571ccaa1f | > | is_public | False | > | min_disk | 0 | > | min_ram | 0 | > | name | lda-image | > | owner | None | > | protected | False | > | size | 534773760 | > | status | active | > | updated_at | 2018-02-17T02:46:37.000000 | > | virtual_size | None | > +------------------+--------------------------------------+ > xsasgov at deimos:sles12.2 test-utils/cloud/cloud-tool master $ glance > --os-user-domain-name Default --os-tenant-id > 33b174b1f999445a9cc4090938f80704 --os-tenant-name lda --os-password > rootroot --os-username xsasgov --os-project-name lda --os-auth-url > http://192.168.122.186:5000/v3 image-list > +----+------+-------------+------------------+------+--------+ > | ID | Name | Disk Format | Container Format | Size | Status | > +----+------+-------------+------------------+------+--------+ > +----+------+-------------+------------------+------+--------+ > > > However the image is there. If I login to the openstack server and use the > openstack admin account to list the images then it is appears. So I'm > wondering what I've missed when setting up the non-admin user? > > Thanks > > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Remo at italy1.com Mon Feb 19 01:55:50 2018 From: Remo at italy1.com (Remo Mattei) Date: Sun, 18 Feb 2018 17:55:50 -0800 Subject: [Openstack] requirements on a user to create an image in glance In-Reply-To: References: Message-ID: if they are part of the same projects they can see the images https://docs.openstack.org/image-guide/share-images.html Hopefully that helps! > On Feb 18, 2018, at 14:54, Sashan Govender wrote: > > If a user A who is part of project P creates an image then can user B who is part of project P as well see it and use it? > > On Sun, Feb 18, 2018 at 12:02 PM > wrote: > As a user you cannot make an image public only admin can. > > You can ask admin to make it public > You can share it with your other projects > >  dal mio iPhone X > > Il giorno 17 feb 2018, alle ore 00:49, Sashan Govender > ha scritto: > >> Hi >> >> What requirements must an openstack user meet to be able to create images in glance. I can create an image as a user as shown below but when I try to list the images as the same user it is not shown. 
>> >> xsasgov at deimos:sles12.2 test-utils/cloud/cloud-tool master $ glance --os-user-domain-name Default --os-tenant-id 33b174b1f999445a9cc4090938f80704 --os-tenant-name lda --os-password rootroot --os-username xsasgov --os-auth-url http://192.168.122.186:5000/v3 image-create --name lda-image --disk-format qcow2 --container-format bare --file testimage.qcow2 >> >> +------------------+--------------------------------------+ >> | Property | Value | >> +------------------+--------------------------------------+ >> | checksum | 11a0609ef8c758fafc722529fbbbc487 | >> | container_format | bare | >> | created_at | 2018-02-17T02:46:33.000000 | >> | deleted | False | >> | deleted_at | None | >> | disk_format | qcow2 | >> | id | 4d3a32b1-2050-4986-9b93-306571ccaa1f | >> | is_public | False | >> | min_disk | 0 | >> | min_ram | 0 | >> | name | lda-image | >> | owner | None | >> | protected | False | >> | size | 534773760 | >> | status | active | >> | updated_at | 2018-02-17T02:46:37.000000 | >> | virtual_size | None | >> +------------------+--------------------------------------+ >> xsasgov at deimos:sles12.2 test-utils/cloud/cloud-tool master $ glance --os-user-domain-name Default --os-tenant-id 33b174b1f999445a9cc4090938f80704 --os-tenant-name lda --os-password rootroot --os-username xsasgov --os-project-name lda --os-auth-url http://192.168.122.186:5000/v3 image-list >> +----+------+-------------+------------------+------+--------+ >> | ID | Name | Disk Format | Container Format | Size | Status | >> +----+------+-------------+------------------+------+--------+ >> +----+------+-------------+------------------+------+--------+ >> >> >> However the image is there. If I login to the openstack server and use the openstack admin account to list the images then it is appears. So I'm wondering what I've missed when setting up the non-admin user? >> >> Thanks >> >> > >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From Remo at italy1.com Mon Feb 19 01:59:56 2018 From: Remo at italy1.com (Remo Mattei) Date: Sun, 18 Feb 2018 17:59:56 -0800 Subject: [Openstack] requirements on a user to create an image in glance In-Reply-To: References: Message-ID: <88D4E076-592B-4B71-A3FC-C3C39BD6A2A6@italy1.com> just in case here is one more page for sharing https://specs.openstack.org/openstack/glance-specs/specs/newton/approved/glance/community_visibility.html > On Feb 18, 2018, at 14:54, Sashan Govender wrote: > > If a user A who is part of project P creates an image then can user B who is part of project P as well see it and use it? > > On Sun, Feb 18, 2018 at 12:02 PM > wrote: > As a user you cannot make an image public only admin can. > > You can ask admin to make it public > You can share it with your other projects > >  dal mio iPhone X > > Il giorno 17 feb 2018, alle ore 00:49, Sashan Govender > ha scritto: > >> Hi >> >> What requirements must an openstack user meet to be able to create images in glance. I can create an image as a user as shown below but when I try to list the images as the same user it is not shown. 
>> >> xsasgov at deimos:sles12.2 test-utils/cloud/cloud-tool master $ glance --os-user-domain-name Default --os-tenant-id 33b174b1f999445a9cc4090938f80704 --os-tenant-name lda --os-password rootroot --os-username xsasgov --os-auth-url http://192.168.122.186:5000/v3 image-create --name lda-image --disk-format qcow2 --container-format bare --file testimage.qcow2 >> >> +------------------+--------------------------------------+ >> | Property | Value | >> +------------------+--------------------------------------+ >> | checksum | 11a0609ef8c758fafc722529fbbbc487 | >> | container_format | bare | >> | created_at | 2018-02-17T02:46:33.000000 | >> | deleted | False | >> | deleted_at | None | >> | disk_format | qcow2 | >> | id | 4d3a32b1-2050-4986-9b93-306571ccaa1f | >> | is_public | False | >> | min_disk | 0 | >> | min_ram | 0 | >> | name | lda-image | >> | owner | None | >> | protected | False | >> | size | 534773760 | >> | status | active | >> | updated_at | 2018-02-17T02:46:37.000000 | >> | virtual_size | None | >> +------------------+--------------------------------------+ >> xsasgov at deimos:sles12.2 test-utils/cloud/cloud-tool master $ glance --os-user-domain-name Default --os-tenant-id 33b174b1f999445a9cc4090938f80704 --os-tenant-name lda --os-password rootroot --os-username xsasgov --os-project-name lda --os-auth-url http://192.168.122.186:5000/v3 image-list >> +----+------+-------------+------------------+------+--------+ >> | ID | Name | Disk Format | Container Format | Size | Status | >> +----+------+-------------+------------------+------+--------+ >> +----+------+-------------+------------------+------+--------+ >> >> >> However the image is there. If I login to the openstack server and use the openstack admin account to list the images then it is appears. So I'm wondering what I've missed when setting up the non-admin user? >> >> Thanks >> >> > >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From ambadiaravind at gmail.com Mon Feb 19 06:11:38 2018 From: ambadiaravind at gmail.com (aRaviNd) Date: Mon, 19 Feb 2018 11:41:38 +0530 Subject: [Openstack] Openstack data replication In-Reply-To: References: Message-ID: Thanks John. You mentioned sync process in global clusters is more efficient. Could you please let me know how sync process is more efficient in global clusters than container sync? Aravind On Wed, Feb 14, 2018 at 9:10 PM, John Dickinson wrote: > A global cluster is one logical cluster that durably stores data across > all the available failure domains (the highest level of failure domain is > "region"). For example, if you have 2 regions (ie DCs)and you're using 4 > replicas, you'll end up with 2 replicas in each. > > Container sync is for taking a subset of data stored in one Swift cluster > and synchronizing it with a different Swift cluster. Each Swift cluster is > autonomous and handles it's own durability. So, eg if each Swift cluster > uses 3 replicas, you'll end up with 6x total storage for the data that is > synced. > > In most cases, people use global clusters and are happy with it. It's > definitely been more used than container sync, and the sync process in > global clusters is more efficient. 
> > However, deploying a multi-region Swift cluster comes with an extra set of > challenges above and beyond a single-site deployment. You've got to > consider more things with your inter-region networking, your network > routing, the access patterns in each region, your requirements around > locality, and the data placement of your data. > > All of these challenges are solvable, of course. Start with > https://swift.openstack.org and also feel free to ask here on the mailing > list or on freenode IRC in #openstack-swift. > > Good luck! > > John > > > On 14 Feb 2018, at 6:55, aRaviNd wrote: > > Hi All, > > Whats the difference between container sync and global cluster? Which > should we use for large data set of 100 Tb ? > > Aravind > > On Feb 13, 2018 7:52 PM, "aRaviNd" wrote: > > Hi All, > > We are working on implementing Openstack swift replication and would like > to know whats the better approach, container sync or global cluster, on > what scenario we should choose one above the another. > > Swift cluster will be used as a backend for web application deployed on > multiple regions which is configured as active passive using DNS. > > Data usage can grow upto 100TB starting with 1TB. What will be better > option to sync data between regions? > > Thank You > > Aravind M D > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danny.rotscher at tu-dresden.de Mon Feb 19 07:55:02 2018 From: danny.rotscher at tu-dresden.de (Danny Rotscher) Date: Mon, 19 Feb 2018 08:55:02 +0100 Subject: [Openstack] Newton - os update Message-ID: Hello, after we updated the os, we cannot create any new instance. We found out, that when ever we create a new vm, libvirt tries to start a vm with bus type iscsi instead of virtio. We also try to set the bus type in the image, but it does not work. Could anybody help me with that problem? Kind regards, Danny -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5151 bytes Desc: S/MIME Cryptographic Signature URL: From berndbausch at gmail.com Mon Feb 19 08:30:53 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Mon, 19 Feb 2018 17:30:53 +0900 Subject: [Openstack] Newton - os update In-Reply-To: References: Message-ID: <004901d3a95b$f6914530$e3b3cf90$@gmail.com> No solution, but a similar problem has been described here: https://ask.openstack.org/en/question/112521/how-to-launch-a-vm-that-use-devvda-and-busvirtio-using-nova/. -----Original Message----- From: Danny Rotscher [mailto:danny.rotscher at tu-dresden.de] Sent: Monday, February 19, 2018 4:55 PM To: openstack at lists.openstack.org Subject: [Openstack] Newton - os update Hello, after we updated the os, we cannot create any new instance. We found out, that when ever we create a new vm, libvirt tries to start a vm with bus type iscsi instead of virtio. We also try to set the bus type in the image, but it does not work. Could anybody help me with that problem? Kind regards, Danny -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 5518 bytes Desc: not available URL: From danny.rotscher at tu-dresden.de Mon Feb 19 08:50:59 2018 From: danny.rotscher at tu-dresden.de (Danny Rotscher) Date: Mon, 19 Feb 2018 09:50:59 +0100 Subject: [Openstack] Newton - os update In-Reply-To: <004901d3a95b$f6914530$e3b3cf90$@gmail.com> References: <004901d3a95b$f6914530$e3b3cf90$@gmail.com> Message-ID: <41394745-4ab9-eed2-47b7-b2a70144fa48@tu-dresden.de> Thanks for the hint, it seems to be a really new problem. I will look a bit deeper into the nova code. Am 19.02.2018 um 09:30 schrieb Bernd Bausch: > No solution, but a similar problem has been described here: > https://ask.openstack.org/en/question/112521/how-to-launch-a-vm-that-use-devvda-and-busvirtio-using-nova/. > > -----Original Message----- > From: Danny Rotscher [mailto:danny.rotscher at tu-dresden.de] > Sent: Monday, February 19, 2018 4:55 PM > To: openstack at lists.openstack.org > Subject: [Openstack] Newton - os update > > Hello, > > after we updated the os, we cannot create any new instance. > We found out, that when ever we create a new vm, libvirt tries to start a vm > with bus type iscsi instead of virtio. > We also try to set the bus type in the image, but it does not work. > Could anybody help me with that problem? > > Kind regards, > Danny > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5151 bytes Desc: S/MIME Cryptographic Signature URL: From mrhillsman at gmail.com Mon Feb 19 17:31:39 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 19 Feb 2018 11:31:39 -0600 Subject: [Openstack] User Committee Elections Message-ID: Hi everyone, We had to push the voting back a week if you have been keeping up with the UC elections[0]. That being said, election officials have sent out the poll and so voting is now open! Be sure to check out the candidates - https://goo.gl/x183he - and get your vote in before the poll closes. [0] https://governance.openstack.org/uc/reference/uc-election-feb2018.html -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gurud78 at gmail.com Tue Feb 20 10:26:25 2018 From: gurud78 at gmail.com (Guru Desai) Date: Tue, 20 Feb 2018 15:56:25 +0530 Subject: [Openstack] [openstack] [pike] Message-ID: Hi I am trying to install openstack pike and following the instruction mentioned in https://docs.openstack.org/keystone/pike/install/keystone-install-rdo.html for installing keystone. I am done with the environment setup. But after installing keystone, tried to create a project as mentioned in the guide. It shows below error. Not seeing any errors in the logs as such. Appreciate any inputs : openstack project create --domain default --description "Service Project" service Failed to discover available identity versions when contacting http://nagraj_controller:35357/v3/. Attempting to parse version from URL. Bad Request (HTTP 400) Guru -------------- next part -------------- An HTML attachment was scrubbed... URL: From ebiibe82 at gmail.com Tue Feb 20 11:13:45 2018 From: ebiibe82 at gmail.com (Amit Kumar) Date: Tue, 20 Feb 2018 16:43:45 +0530 Subject: [Openstack] [openstack][openstack-nova][openstack-operators] Query regarding LXC instantiation using nova Message-ID: Hello, I have a running OpenStack Ocata setup on which I am able to launch VMs. But I want to move to LXC instantiation instead of VMs. 
So, for this, I installed nova-compute-lxd on my compute node (Ubuntu 16.04). */etc/nova/nova-compute.conf* on my compute nodes was changed to contain the following values for *compute_driver* and* virt_type*. *[DEFAULT]* *compute_driver = lxd.LXDDriver* *[libvirt]* *virt_type = lxc* After this, I restarted the nova-compute service and launched an instance, launch failed after some time (4-5 mins remain in spawning state) and gives the following error: [Error: No valid host was found. There are not enough hosts available.]. Detailed nova-compute logs are attached with this e-mail. Could you please guide what else is required to launch container on OpenStack setup? What other configurations will I need to configure LXD and my nova user to see the LXD daemon. Regards, Amit -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nova-compute.log Type: application/octet-stream Size: 17730 bytes Desc: not available URL: From bestofnithish at gmail.com Tue Feb 20 14:06:10 2018 From: bestofnithish at gmail.com (nithish B) Date: Tue, 20 Feb 2018 09:06:10 -0500 Subject: [Openstack] [openstack] [pike] In-Reply-To: References: Message-ID: Hi Guru, This looks more like a problem of finding the credentials. Please check if you sourced the credentials, and you did it right. A sample source parameters might look like the following: export OS_USERNAME=admin export OS_PASSWORD= export OS_TENANT_NAME=admin export OS_AUTH_URL=https://nagaraj_controller:5000/v3 Thanks. Regards, Nitish B. On Tue, Feb 20, 2018 at 5:26 AM, Guru Desai wrote: > Hi > > I am trying to install openstack pike and following the instruction > mentioned in https://docs.openstack.org/keystone/pike/install/ > keystone-install-rdo.html for installing keystone. I am done with the > environment setup. But after installing keystone, tried to create a project > as mentioned in the guide. It shows below error. Not seeing any errors in > the logs as such. Appreciate any inputs : > > openstack project create --domain default --description "Service Project" > service > > Failed to discover available identity versions when contacting > http://nagraj_controller:35357/v3/. Attempting to parse version from URL. > Bad Request (HTTP 400) > > Guru > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at ubuntu.com Tue Feb 20 14:43:12 2018 From: james.page at ubuntu.com (James Page) Date: Tue, 20 Feb 2018 14:43:12 +0000 Subject: [Openstack] [nova] [nova-lxd] Query regarding LXC instantiation using nova In-Reply-To: References: Message-ID: Hi Amit (re-titled thread with scoped topics) As Matt has already referenced, [0] is a good starting place for using the nova-lxd driver. On Tue, 20 Feb 2018 at 11:13 Amit Kumar wrote: > Hello, > > I have a running OpenStack Ocata setup on which I am able to launch VMs. > But I want to move to LXC instantiation instead of VMs. So, for this, I > installed nova-compute-lxd on my compute node (Ubuntu 16.04). > */etc/nova/nova-compute.conf* on my compute nodes was changed to contain > the following values for *compute_driver* and* virt_type*. 
> > *[DEFAULT]* > *compute_driver = lxd.LXDDriver* > You only need the above part for nova-lxd (the below snippet is for the libvirt/lxc driver) > *[libvirt]* > *virt_type = lxc* > > After this, I restarted the nova-compute service and launched an instance, > launch failed after some time (4-5 mins remain in spawning state) and gives > the following error: > [Error: No valid host was found. There are not enough hosts available.]. Detailed > nova-compute logs are attached with this e-mail. > Looking at your logs, it would appear a VIF plugging timeout occurred; was your cloud functional with Libvirt/KVM before you made the switch to using nova-lxd? The neutron log files would be a good place to look so see what went wrong. Regards James [0] https://linuxcontainers.org/lxd/getting-started-openstack/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gurud78 at gmail.com Tue Feb 20 15:34:29 2018 From: gurud78 at gmail.com (Guru Desai) Date: Tue, 20 Feb 2018 21:04:29 +0530 Subject: [Openstack] [openstack] [pike] In-Reply-To: References: Message-ID: Hi Nithish, That part is verified. Below is the snippet of the rc file export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_PROJECT_NAME=admin export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_DOMAIN_NAME=Default export OS_AUTH_URL=http://controller:35357/v3 export OS_IDENTITY_API_VERSION=3 [root at controller~]# openstack domain create --description "Default Domain" default Failed to discover available identity versions when contacting http://controller:35357/v3/. Attempting to parse version from URL. Bad Request (HTTP 400) On Tue, Feb 20, 2018 at 7:36 PM, nithish B wrote: > Hi Guru, > This looks more like a problem of finding the credentials. Please check if > you sourced the credentials, and you did it right. A sample source > parameters might look like the following: > > export OS_USERNAME=admin > export OS_PASSWORD= > export OS_TENANT_NAME=admin > export OS_AUTH_URL=https://nagaraj_controller:5000/v3 > > Thanks. > > > Regards, > Nitish B. > > On Tue, Feb 20, 2018 at 5:26 AM, Guru Desai wrote: > >> Hi >> >> I am trying to install openstack pike and following the instruction >> mentioned in https://docs.openstack.org/keystone/pike/install/keystone >> -install-rdo.html for installing keystone. I am done with the >> environment setup. But after installing keystone, tried to create a project >> as mentioned in the guide. It shows below error. Not seeing any errors in >> the logs as such. Appreciate any inputs : >> >> openstack project create --domain default --description "Service >> Project" service >> >> Failed to discover available identity versions when contacting >> http://nagraj_controller:35357/v3/. Attempting to parse version from URL. >> Bad Request (HTTP 400) >> >> Guru >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Tue Feb 20 15:58:53 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 20 Feb 2018 10:58:53 -0500 Subject: [Openstack] [openstack] [pike] In-Reply-To: References: Message-ID: Did you run all the keystone-manage bootstrap commands? 
This looks like you're trying to create the domain you're supposed to be authenticating against. On Tue, Feb 20, 2018 at 10:34 AM, Guru Desai wrote: > Hi Nithish, > > That part is verified. Below is the snippet of the rc file > > export OS_USERNAME=admin > export OS_PASSWORD=ADMIN_PASS > export OS_PROJECT_NAME=admin > export OS_USER_DOMAIN_NAME=Default > export OS_PROJECT_DOMAIN_NAME=Default > export OS_AUTH_URL=http://controller:35357/v3 > export OS_IDENTITY_API_VERSION=3 > > > [root at controller~]# openstack domain create --description "Default Domain" > default > Failed to discover available identity versions when contacting > http://controller:35357/v3/. Attempting to parse version from URL. > Bad Request (HTTP 400) > > > On Tue, Feb 20, 2018 at 7:36 PM, nithish B wrote: >> >> Hi Guru, >> This looks more like a problem of finding the credentials. Please check if >> you sourced the credentials, and you did it right. A sample source >> parameters might look like the following: >> >> export OS_USERNAME=admin >> export OS_PASSWORD= >> export OS_TENANT_NAME=admin >> export OS_AUTH_URL=https://nagaraj_controller:5000/v3 >> >> Thanks. >> >> >> Regards, >> Nitish B. >> >> On Tue, Feb 20, 2018 at 5:26 AM, Guru Desai wrote: >>> >>> Hi >>> >>> I am trying to install openstack pike and following the instruction >>> mentioned in >>> https://docs.openstack.org/keystone/pike/install/keystone-install-rdo.html >>> for installing keystone. I am done with the environment setup. But after >>> installing keystone, tried to create a project as mentioned in the guide. It >>> shows below error. Not seeing any errors in the logs as such. Appreciate any >>> inputs : >>> >>> openstack project create --domain default --description "Service >>> Project" service >>> >>> Failed to discover available identity versions when contacting >>> http://nagraj_controller:35357/v3/. Attempting to parse version from URL. >>> Bad Request (HTTP 400) >>> >>> Guru >>> >>> >>> _______________________________________________ >>> Mailing list: >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >> > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From Remo at italy1.com Tue Feb 20 16:13:03 2018 From: Remo at italy1.com (Remo Mattei) Date: Tue, 20 Feb 2018 08:13:03 -0800 Subject: [Openstack] [openstack] [pike] In-Reply-To: References: Message-ID: <0B9EAEDC-77F1-4AB6-B8FC-5888F1DBFB3C@italy1.com> Why are you auth on the admin port? Try the default 5000? > On Feb 20, 2018, at 7:58 AM, Erik McCormick wrote: > > Did you run all the keystone-manage bootstrap commands? This looks > like you're trying to create the domain you're supposed to be > authenticating against. > > On Tue, Feb 20, 2018 at 10:34 AM, Guru Desai wrote: >> Hi Nithish, >> >> That part is verified. 
Below is the snippet of the rc file >> >> export OS_USERNAME=admin >> export OS_PASSWORD=ADMIN_PASS >> export OS_PROJECT_NAME=admin >> export OS_USER_DOMAIN_NAME=Default >> export OS_PROJECT_DOMAIN_NAME=Default >> export OS_AUTH_URL=http://controller:35357/v3 >> export OS_IDENTITY_API_VERSION=3 >> >> >> [root at controller~]# openstack domain create --description "Default Domain" >> default >> Failed to discover available identity versions when contacting >> http://controller:35357/v3/. Attempting to parse version from URL. >> Bad Request (HTTP 400) >> >> >> On Tue, Feb 20, 2018 at 7:36 PM, nithish B wrote: >>> >>> Hi Guru, >>> This looks more like a problem of finding the credentials. Please check if >>> you sourced the credentials, and you did it right. A sample source >>> parameters might look like the following: >>> >>> export OS_USERNAME=admin >>> export OS_PASSWORD= >>> export OS_TENANT_NAME=admin >>> export OS_AUTH_URL=https://nagaraj_controller:5000/v3 >>> >>> Thanks. >>> >>> >>> Regards, >>> Nitish B. >>> >>> On Tue, Feb 20, 2018 at 5:26 AM, Guru Desai wrote: >>>> >>>> Hi >>>> >>>> I am trying to install openstack pike and following the instruction >>>> mentioned in >>>> https://docs.openstack.org/keystone/pike/install/keystone-install-rdo.html >>>> for installing keystone. I am done with the environment setup. But after >>>> installing keystone, tried to create a project as mentioned in the guide. It >>>> shows below error. Not seeing any errors in the logs as such. Appreciate any >>>> inputs : >>>> >>>> openstack project create --domain default --description "Service >>>> Project" service >>>> >>>> Failed to discover available identity versions when contacting >>>> http://nagraj_controller:35357/v3/. Attempting to parse version from URL. >>>> Bad Request (HTTP 400) >>>> >>>> Guru >>>> >>>> >>>> _______________________________________________ >>>> Mailing list: >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>> Post to : openstack at lists.openstack.org >>>> Unsubscribe : >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>> >>> >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From gurud78 at gmail.com Tue Feb 20 16:22:07 2018 From: gurud78 at gmail.com (Guru Desai) Date: Tue, 20 Feb 2018 21:52:07 +0530 Subject: [Openstack] [openstack] [pike] In-Reply-To: <0B9EAEDC-77F1-4AB6-B8FC-5888F1DBFB3C@italy1.com> References: <0B9EAEDC-77F1-4AB6-B8FC-5888F1DBFB3C@italy1.com> Message-ID: yes, did the bootstrap commands and everything went fine i.e no errors. admin port 35357 as mentioned in the pike install guide for keystone.. keystone-manage bootstrap --bootstrap-password ADMIN_PASS *\* --bootstrap-admin-url http://controller:35357/v3 *\* --bootstrap-internal-url http://controller:5000/v3 *\* --bootstrap-public-url http://controller:5000/v3 *\* --bootstrap-region-id RegionOne On Tue, Feb 20, 2018 at 9:43 PM, Remo Mattei wrote: > Why are you auth on the admin port? Try the default 5000? 
> > > On Feb 20, 2018, at 7:58 AM, Erik McCormick > wrote: > > Did you run all the keystone-manage bootstrap commands? This looks > like you're trying to create the domain you're supposed to be > authenticating against. > > On Tue, Feb 20, 2018 at 10:34 AM, Guru Desai wrote: > > Hi Nithish, > > That part is verified. Below is the snippet of the rc file > > export OS_USERNAME=admin > export OS_PASSWORD=ADMIN_PASS > export OS_PROJECT_NAME=admin > export OS_USER_DOMAIN_NAME=Default > export OS_PROJECT_DOMAIN_NAME=Default > export OS_AUTH_URL=http://controller:35357/v3 > export OS_IDENTITY_API_VERSION=3 > > > [root at controller~]# openstack domain create --description "Default Domain" > default > Failed to discover available identity versions when contacting > http://controller:35357/v3/. Attempting to parse version from URL. > Bad Request (HTTP 400) > > > On Tue, Feb 20, 2018 at 7:36 PM, nithish B > wrote: > > > Hi Guru, > This looks more like a problem of finding the credentials. Please check if > you sourced the credentials, and you did it right. A sample source > parameters might look like the following: > > export OS_USERNAME=admin > export OS_PASSWORD= > export OS_TENANT_NAME=admin > export OS_AUTH_URL=https://nagaraj_controller:5000/v3 > > Thanks. > > > Regards, > Nitish B. > > On Tue, Feb 20, 2018 at 5:26 AM, Guru Desai wrote: > > > Hi > > I am trying to install openstack pike and following the instruction > mentioned in > https://docs.openstack.org/keystone/pike/install/keystone-install-rdo.html > for installing keystone. I am done with the environment setup. But after > installing keystone, tried to create a project as mentioned in the guide. > It > shows below error. Not seeing any errors in the logs as such. Appreciate > any > inputs : > > openstack project create --domain default --description "Service > Project" service > > Failed to discover available identity versions when contacting > http://nagraj_controller:35357/v3/. Attempting to parse version from URL. > Bad Request (HTTP 400) > > Guru > > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ambadiaravind at gmail.com Tue Feb 20 16:32:01 2018 From: ambadiaravind at gmail.com (aRaviNd) Date: Tue, 20 Feb 2018 22:02:01 +0530 Subject: [Openstack] Openstack data replication In-Reply-To: References: Message-ID: Hi All, Any reference or information with regards to how sync process in global clusters is more efficient than container sync in openstack swift. Aravind M D On Mon, Feb 19, 2018 at 11:41 AM, aRaviNd wrote: > Thanks John. > > You mentioned sync process in global clusters is more efficient. Could you > please let me know how sync process is more efficient in global clusters > than container sync? 
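For a concrete picture of what is being compared: a global cluster is one set of rings whose devices carry a region, so the proxies and replicators treat the remote site as just another failure domain, while container sync is two completely independent clusters that mirror individual containers via sync headers. A rough sketch of each (the IPs, device names, realm/cluster names and sync key below are made-up placeholders):

# Global cluster: devices from both regions (r1 / r2) live in the same ring,
# e.g. 4 replicas spread across two sites
swift-ring-builder object.builder create 10 4 1
swift-ring-builder object.builder add r1z1-192.0.2.10:6200/sdb 100
swift-ring-builder object.builder add r2z1-198.51.100.10:6200/sdb 100
swift-ring-builder object.builder rebalance

# Container sync: each cluster keeps its own rings and replica count; one
# container is mirrored to the remote cluster via X-Container-Sync-To/-Key
swift post synced-container -t '//realm/cluster2/AUTH_account/synced-container' -k 'supersecretkey'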
> > Aravind > > On Wed, Feb 14, 2018 at 9:10 PM, John Dickinson wrote: > >> A global cluster is one logical cluster that durably stores data across >> all the available failure domains (the highest level of failure domain is >> "region"). For example, if you have 2 regions (ie DCs)and you're using 4 >> replicas, you'll end up with 2 replicas in each. >> >> Container sync is for taking a subset of data stored in one Swift cluster >> and synchronizing it with a different Swift cluster. Each Swift cluster is >> autonomous and handles it's own durability. So, eg if each Swift cluster >> uses 3 replicas, you'll end up with 6x total storage for the data that is >> synced. >> >> In most cases, people use global clusters and are happy with it. It's >> definitely been more used than container sync, and the sync process in >> global clusters is more efficient. >> >> However, deploying a multi-region Swift cluster comes with an extra set >> of challenges above and beyond a single-site deployment. You've got to >> consider more things with your inter-region networking, your network >> routing, the access patterns in each region, your requirements around >> locality, and the data placement of your data. >> >> All of these challenges are solvable, of course. Start with >> https://swift.openstack.org and also feel free to ask here on the >> mailing list or on freenode IRC in #openstack-swift. >> >> Good luck! >> >> John >> >> >> On 14 Feb 2018, at 6:55, aRaviNd wrote: >> >> Hi All, >> >> Whats the difference between container sync and global cluster? Which >> should we use for large data set of 100 Tb ? >> >> Aravind >> >> On Feb 13, 2018 7:52 PM, "aRaviNd" wrote: >> >> Hi All, >> >> We are working on implementing Openstack swift replication and would like >> to know whats the better approach, container sync or global cluster, on >> what scenario we should choose one above the another. >> >> Swift cluster will be used as a backend for web application deployed on >> multiple regions which is configured as active passive using DNS. >> >> Data usage can grow upto 100TB starting with 1TB. What will be better >> option to sync data between regions? >> >> Thank You >> >> Aravind M D >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Tue Feb 20 16:40:46 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 20 Feb 2018 11:40:46 -0500 Subject: [Openstack] [openstack] [pike] In-Reply-To: References: <0B9EAEDC-77F1-4AB6-B8FC-5888F1DBFB3C@italy1.com> Message-ID: According to your bootstrap and auth file, you're using http://controller:35357/v3, but the error you posted said http://nagraj_controller:35357/v3/. Run this: openstack --debug endpoint list Paste the output in here. -Erik On Tue, Feb 20, 2018 at 11:22 AM, Guru Desai wrote: > yes, did the bootstrap commands and everything went fine i.e no errors. > admin port 35357 as mentioned in the pike install guide for keystone.. 
> > keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ > > --bootstrap-admin-url http://controller:35357/v3 \ > > --bootstrap-internal-url http://controller:5000/v3 \ > > --bootstrap-public-url http://controller:5000/v3 \ > > --bootstrap-region-id RegionOne > > > > On Tue, Feb 20, 2018 at 9:43 PM, Remo Mattei wrote: >> >> Why are you auth on the admin port? Try the default 5000? >> >> >> On Feb 20, 2018, at 7:58 AM, Erik McCormick >> wrote: >> >> Did you run all the keystone-manage bootstrap commands? This looks >> like you're trying to create the domain you're supposed to be >> authenticating against. >> >> On Tue, Feb 20, 2018 at 10:34 AM, Guru Desai wrote: >> >> Hi Nithish, >> >> That part is verified. Below is the snippet of the rc file >> >> export OS_USERNAME=admin >> export OS_PASSWORD=ADMIN_PASS >> export OS_PROJECT_NAME=admin >> export OS_USER_DOMAIN_NAME=Default >> export OS_PROJECT_DOMAIN_NAME=Default >> export OS_AUTH_URL=http://controller:35357/v3 >> export OS_IDENTITY_API_VERSION=3 >> >> >> [root at controller~]# openstack domain create --description "Default Domain" >> default >> Failed to discover available identity versions when contacting >> http://controller:35357/v3/. Attempting to parse version from URL. >> Bad Request (HTTP 400) >> >> >> On Tue, Feb 20, 2018 at 7:36 PM, nithish B >> wrote: >> >> >> Hi Guru, >> This looks more like a problem of finding the credentials. Please check if >> you sourced the credentials, and you did it right. A sample source >> parameters might look like the following: >> >> export OS_USERNAME=admin >> export OS_PASSWORD= >> export OS_TENANT_NAME=admin >> export OS_AUTH_URL=https://nagaraj_controller:5000/v3 >> >> Thanks. >> >> >> Regards, >> Nitish B. >> >> On Tue, Feb 20, 2018 at 5:26 AM, Guru Desai wrote: >> >> >> Hi >> >> I am trying to install openstack pike and following the instruction >> mentioned in >> https://docs.openstack.org/keystone/pike/install/keystone-install-rdo.html >> for installing keystone. I am done with the environment setup. But after >> installing keystone, tried to create a project as mentioned in the guide. >> It >> shows below error. Not seeing any errors in the logs as such. Appreciate >> any >> inputs : >> >> openstack project create --domain default --description "Service >> Project" service >> >> Failed to discover available identity versions when contacting >> http://nagraj_controller:35357/v3/. Attempting to parse version from URL. 
>> Bad Request (HTTP 400) >> >> Guru >> >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> >> >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> > From gurud78 at gmail.com Tue Feb 20 16:54:25 2018 From: gurud78 at gmail.com (Guru Desai) Date: Tue, 20 Feb 2018 22:24:25 +0530 Subject: [Openstack] [openstack] [pike] In-Reply-To: References: <0B9EAEDC-77F1-4AB6-B8FC-5888F1DBFB3C@italy1.com> Message-ID: Here you go with the output : [root at controller ~]# openstack --debug endpoint list START with options: [u'--debug', u'endpoint', u'list'] options: Namespace(access_key='', access_secret='***', access_token='***', access_token_endpoint='', access_token_type='', auth_type='', auth_url=' http://controller:35357/v3/', cacert=None, cert='', client_id='', client_secret='***', cloud='', code='', consumer_key='', consumer_secret='***', debug=True, default_domain='default', default_domain_id='', default_domain_name='', deferred_help=False, discovery_endpoint='', domain_id='', domain_name='', endpoint='', identity_provider='', identity_provider_url='', insecure=None, interface='', key='', log_file=None, openid_scope='', os_beta_command=False, os_compute_api_version='', os_identity_api_version='3', os_image_api_version='', os_network_api_version='', os_object_api_version='', os_project_id=None, os_project_name=None, os_volume_api_version='', passcode='', password='***', profile='', project_domain_id='', project_domain_name='Default', project_id='', project_name='admin', protocol='', redirect_uri='', region_name='', service_provider_endpoint='', service_provider_entity_id='', timing=False, token='***', trust_id='', url='', user_domain_id='', user_domain_name='Default', user_id='', username='admin', verbose_level=3, verify=None) Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], u'image_api_version': u'2', 'verify': True, u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', 'project_domain_name': 'Default'}, 'default_domain': 'default', u'container_api_version': u'1', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 'timing': False, 'password': '***', u'application_catalog_api_version': u'1', 'cacert': None, u'key_manager_api_version': u'v1', u'workflow_api_version': u'2', 'deferred_help': False, u'identity_api_version': '3', u'volume_api_version': u'2', 
'cert': None, u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, u'interface': None, u'disable_vendor_agent': {}} defaults: {u'auth_type': 'password', u'status': u'active', u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 'api_timeout': None, u'baremetal_api_version': u'1', u'image_api_version': u'2', u'container_infra_api_version': u'1', u'metering_api_version': u'2', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', u'application_catalog_api_version': u'1', u'key_manager_api_version': u'v1', u'workflow_api_version': u'2', 'verify': True, u'identity_api_version': u'2.0', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', u'container_api_version': u'1', u'dns_api_version': u'2', u'object_store_api_version': u'1', u'interface': None, u'disable_vendor_agent': {}} cloud cfg: {'auth_type': 'password', 'beta_command': False, u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], u'image_api_version': u'2', 'verify': True, u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', 'project_domain_name': 'Default'}, 'default_domain': 'default', u'container_api_version': u'1', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 'timing': False, 'password': '***', u'application_catalog_api_version': u'1', 'cacert': None, u'key_manager_api_version': u'v1', u'workflow_api_version': u'2', 'deferred_help': False, u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, u'interface': None, u'disable_vendor_agent': {}} compute API version 2, cmd group openstack.compute.v2 network API version 2, cmd group openstack.network.v2 image API version 2, cmd group openstack.image.v2 volume API version 2, cmd group openstack.volume.v2 identity API version 3, cmd group openstack.identity.v3 object_store API version 1, cmd group openstack.object_store.v1 neutronclient API version 2, cmd group openstack.neutronclient.v2 Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], u'image_api_version': u'2', 'verify': True, u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', 'project_domain_name': 'Default'}, 'default_domain': 'default', u'container_api_version': u'1', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 'timing': False, 'password': '***', u'application_catalog_api_version': u'1', 'cacert': None, u'key_manager_api_version': 
u'v1', u'workflow_api_version': u'2', 'deferred_help': False, u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, u'interface': None, u'disable_vendor_agent': {}} Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], u'image_api_version': u'2', 'verify': True, u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', 'project_domain_name': 'Default'}, 'default_domain': 'default', u'container_api_version': u'1', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 'timing': False, 'password': '***', u'application_catalog_api_version': u'1', 'cacert': None, u'key_manager_api_version': u'v1', u'workflow_api_version': u'2', 'deferred_help': False, u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, u'interface': None, u'disable_vendor_agent': {}} command: endpoint list -> openstackclient.identity.v3.endpoint.ListEndpoint (auth=True) Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], u'image_api_version': u'2', 'verify': True, u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', 'project_domain_name': 'Default'}, 'default_domain': 'default', u'container_api_version': u'1', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 'timing': False, 'password': '***', u'application_catalog_api_version': u'1', 'cacert': None, u'key_manager_api_version': u'v1', u'workflow_api_version': u'2', 'deferred_help': False, u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, u'interface': None, u'disable_vendor_agent': {}} Using auth plugin: password Using parameters {'username': 'admin', 'project_name': 'admin', 'user_domain_name': 'Default', 'auth_url': 'http://controller:35357/v3/', 'password': '***', 'project_domain_name': 'Default'} Get auth_ref REQ: curl -g -i -X GET http://controller:35357/v3/ -H "Accept: application/json" -H "User-Agent: osc-lib/1.7.0 keystoneauth1/3.1.0 python-requests/2.14.2 CPython/2.7.5" Starting new HTTP connection (1): controller http://controller:35357 "GET /v3/ HTTP/1.1" 400 347 RESP: [400] Date: Tue, 20 Feb 2018 16:50:08 GMT Server: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5 Content-Length: 347 Connection: close Content-Type: text/html; charset=iso-8859-1 RESP BODY: Omitted, Content-Type is set to text/html; 
charset=iso-8859-1. Only application/json responses have their bodies logged. Request returned failure status: 400 Failed to discover available identity versions when contacting http://controller:35357/v3/. Attempting to parse version from URL. Making authentication request to http://controller:35357/v3/auth/tokens Resetting dropped connection: controller http://controller:35357 "POST /v3/auth/tokens HTTP/1.1" 400 347 Request returned failure status: 400 Bad Request (HTTP 400) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/cliff/app.py", line 393, in run_subcommand self.prepare_to_run_command(cmd) File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line 200, in prepare_to_run_command return super(OpenStackShell, self).prepare_to_run_command(cmd) File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 437, in prepare_to_run_command self.client_manager.auth_ref File "/usr/lib/python2.7/site-packages/openstackclient/common/clientmanager.py", line 99, in auth_ref return super(ClientManager, self).auth_ref File "/usr/lib/python2.7/site-packages/osc_lib/clientmanager.py", line 239, in auth_ref self._auth_ref = self.auth.get_auth_ref(self.session) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/generic/base.py", line 198, in get_auth_ref return self._plugin.get_auth_ref(session, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py", line 167, in get_auth_ref authenticated=False, log=False, **rkwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 853, in post return self.request(url, 'POST', **kwargs) File "/usr/lib/python2.7/site-packages/osc_lib/session.py", line 40, in request resp = super(TimingSession, self).request(url, method, **kwargs) File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner return wrapped(*args, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 742, in request raise exceptions.from_response(resp, method, url) BadRequest: Bad Request (HTTP 400) clean_up ListEndpoint: Bad Request (HTTP 400) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 134, in run ret_val = super(OpenStackShell, self).run(argv) File "/usr/lib/python2.7/site-packages/cliff/app.py", line 279, in run result = self.run_subcommand(remainder) File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 169, in run_subcommand ret_value = super(OpenStackShell, self).run_subcommand(argv) File "/usr/lib/python2.7/site-packages/cliff/app.py", line 393, in run_subcommand self.prepare_to_run_command(cmd) File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line 200, in prepare_to_run_command return super(OpenStackShell, self).prepare_to_run_command(cmd) File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 437, in prepare_to_run_command self.client_manager.auth_ref File "/usr/lib/python2.7/site-packages/openstackclient/common/clientmanager.py", line 99, in auth_ref return super(ClientManager, self).auth_ref File "/usr/lib/python2.7/site-packages/osc_lib/clientmanager.py", line 239, in auth_ref self._auth_ref = self.auth.get_auth_ref(self.session) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/generic/base.py", line 198, in get_auth_ref return self._plugin.get_auth_ref(session, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py", line 167, in get_auth_ref authenticated=False, log=False, **rkwargs) File 
"/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 853, in post return self.request(url, 'POST', **kwargs) File "/usr/lib/python2.7/site-packages/osc_lib/session.py", line 40, in request resp = super(TimingSession, self).request(url, method, **kwargs) File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner return wrapped(*args, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 742, in request raise exceptions.from_response(resp, method, url) BadRequest: Bad Request (HTTP 400) END return value: 1 [root at controller ~]# On Tue, Feb 20, 2018 at 10:10 PM, Erik McCormick wrote: > According to your bootstrap and auth file, you're using > http://controller:35357/v3, but the error you posted said > http://nagraj_controller:35357/v3/. > > Run this: > > openstack --debug endpoint list > > Paste the output in here. > > -Erik > > On Tue, Feb 20, 2018 at 11:22 AM, Guru Desai wrote: > > yes, did the bootstrap commands and everything went fine i.e no errors. > > admin port 35357 as mentioned in the pike install guide for keystone.. > > > > keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ > > > > --bootstrap-admin-url http://controller:35357/v3 \ > > > > --bootstrap-internal-url http://controller:5000/v3 \ > > > > --bootstrap-public-url http://controller:5000/v3 \ > > > > --bootstrap-region-id RegionOne > > > > > > > > On Tue, Feb 20, 2018 at 9:43 PM, Remo Mattei wrote: > >> > >> Why are you auth on the admin port? Try the default 5000? > >> > >> > >> On Feb 20, 2018, at 7:58 AM, Erik McCormick > > >> wrote: > >> > >> Did you run all the keystone-manage bootstrap commands? This looks > >> like you're trying to create the domain you're supposed to be > >> authenticating against. > >> > >> On Tue, Feb 20, 2018 at 10:34 AM, Guru Desai wrote: > >> > >> Hi Nithish, > >> > >> That part is verified. Below is the snippet of the rc file > >> > >> export OS_USERNAME=admin > >> export OS_PASSWORD=ADMIN_PASS > >> export OS_PROJECT_NAME=admin > >> export OS_USER_DOMAIN_NAME=Default > >> export OS_PROJECT_DOMAIN_NAME=Default > >> export OS_AUTH_URL=http://controller:35357/v3 > >> export OS_IDENTITY_API_VERSION=3 > >> > >> > >> [root at controller~]# openstack domain create --description "Default > Domain" > >> default > >> Failed to discover available identity versions when contacting > >> http://controller:35357/v3/. Attempting to parse version from URL. > >> Bad Request (HTTP 400) > >> > >> > >> On Tue, Feb 20, 2018 at 7:36 PM, nithish B > >> wrote: > >> > >> > >> Hi Guru, > >> This looks more like a problem of finding the credentials. Please check > if > >> you sourced the credentials, and you did it right. A sample source > >> parameters might look like the following: > >> > >> export OS_USERNAME=admin > >> export OS_PASSWORD= > >> export OS_TENANT_NAME=admin > >> export OS_AUTH_URL=https://nagaraj_controller:5000/v3 > >> > >> Thanks. > >> > >> > >> Regards, > >> Nitish B. > >> > >> On Tue, Feb 20, 2018 at 5:26 AM, Guru Desai wrote: > >> > >> > >> Hi > >> > >> I am trying to install openstack pike and following the instruction > >> mentioned in > >> https://docs.openstack.org/keystone/pike/install/ > keystone-install-rdo.html > >> for installing keystone. I am done with the environment setup. But after > >> installing keystone, tried to create a project as mentioned in the > guide. > >> It > >> shows below error. Not seeing any errors in the logs as such. 
Appreciate > >> any > >> inputs : > >> > >> openstack project create --domain default --description "Service > >> Project" service > >> > >> Failed to discover available identity versions when contacting > >> http://nagraj_controller:35357/v3/. Attempting to parse version from > URL. > >> Bad Request (HTTP 400) > >> > >> Guru > >> > >> > >> _______________________________________________ > >> Mailing list: > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >> Post to : openstack at lists.openstack.org > >> Unsubscribe : > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >> > >> > >> > >> > >> _______________________________________________ > >> Mailing list: > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >> Post to : openstack at lists.openstack.org > >> Unsubscribe : > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >> > >> > >> _______________________________________________ > >> Mailing list: > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >> Post to : openstack at lists.openstack.org > >> Unsubscribe : > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >> > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bestofnithish at gmail.com Tue Feb 20 17:26:58 2018 From: bestofnithish at gmail.com (nithish B) Date: Tue, 20 Feb 2018 12:26:58 -0500 Subject: [Openstack] [openstack] [pike] In-Reply-To: References: <0B9EAEDC-77F1-4AB6-B8FC-5888F1DBFB3C@italy1.com> Message-ID: Wait. Why does it say curl -g -i -X GET http://controller:35357/v3/ -H "Accept: application/json" - shouldn't it be your "http://nagraj_controller: 35357/v3/ ."? Regards, Nitish B. On Tue, Feb 20, 2018 at 11:54 AM, Guru Desai wrote: > Here you go with the output : > > > [root at controller ~]# openstack --debug endpoint list > START with options: [u'--debug', u'endpoint', u'list'] > options: Namespace(access_key='', access_secret='***', access_token='***', > access_token_endpoint='', access_token_type='', auth_type='', auth_url=' > http://controller:35357/v3/', cacert=None, cert='', client_id='', > client_secret='***', cloud='', code='', consumer_key='', > consumer_secret='***', debug=True, default_domain='default', > default_domain_id='', default_domain_name='', deferred_help=False, > discovery_endpoint='', domain_id='', domain_name='', endpoint='', > identity_provider='', identity_provider_url='', insecure=None, > interface='', key='', log_file=None, openid_scope='', > os_beta_command=False, os_compute_api_version='', > os_identity_api_version='3', os_image_api_version='', > os_network_api_version='', os_object_api_version='', os_project_id=None, > os_project_name=None, os_volume_api_version='', passcode='', > password='***', profile='', project_domain_id='', > project_domain_name='Default', project_id='', project_name='admin', > protocol='', redirect_uri='', region_name='', service_provider_endpoint='', > service_provider_entity_id='', timing=False, token='***', trust_id='', > url='', user_domain_id='', user_domain_name='Default', user_id='', > username='admin', verbose_level=3, verify=None) > Auth plugin password selected > auth_config_hook(): {'auth_type': 'password', 'beta_command': False, > u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', > u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', > u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', > 'networks': [], u'image_api_version': u'2', 
'verify': True, > u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': > 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, > 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', > 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', > 'project_domain_name': 'Default'}, 'default_domain': 'default', > u'container_api_version': u'1', u'image_api_use_tasks': False, > u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', > 'timing': False, 'password': '***', u'application_catalog_api_version': > u'1', 'cacert': None, u'key_manager_api_version': u'v1', > u'workflow_api_version': u'2', 'deferred_help': False, > u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, > u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, > u'interface': None, u'disable_vendor_agent': {}} > defaults: {u'auth_type': 'password', u'status': u'active', > u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', > 'api_timeout': None, u'baremetal_api_version': u'1', u'image_api_version': > u'2', u'container_infra_api_version': u'1', u'metering_api_version': > u'2', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', > u'orchestration_api_version': u'1', 'cacert': None, u'network_api_version': > u'2', u'message': u'', u'image_format': u'qcow2', u'application_catalog_api_version': > u'1', u'key_manager_api_version': u'v1', u'workflow_api_version': u'2', > 'verify': True, u'identity_api_version': u'2.0', u'volume_api_version': > u'2', 'cert': None, u'secgroup_source': u'neutron', > u'container_api_version': u'1', u'dns_api_version': u'2', > u'object_store_api_version': u'1', u'interface': None, > u'disable_vendor_agent': {}} > cloud cfg: {'auth_type': 'password', 'beta_command': False, > u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', > u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', > u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', > 'networks': [], u'image_api_version': u'2', 'verify': True, > u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': > 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, > 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', > 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', > 'project_domain_name': 'Default'}, 'default_domain': 'default', > u'container_api_version': u'1', u'image_api_use_tasks': False, > u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', > 'timing': False, 'password': '***', u'application_catalog_api_version': > u'1', 'cacert': None, u'key_manager_api_version': u'v1', > u'workflow_api_version': u'2', 'deferred_help': False, > u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, > u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, > u'interface': None, u'disable_vendor_agent': {}} > compute API version 2, cmd group openstack.compute.v2 > network API version 2, cmd group openstack.network.v2 > image API version 2, cmd group openstack.image.v2 > volume API version 2, cmd group openstack.volume.v2 > identity API version 3, cmd group openstack.identity.v3 > object_store API version 1, cmd group openstack.object_store.v1 > neutronclient API version 2, cmd group openstack.neutronclient.v2 > Auth plugin password selected > auth_config_hook(): {'auth_type': 'password', 'beta_command': False, > u'compute_api_version': u'2', 'key': 
None, u'database_api_version': u'1.0', > u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', > u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', > 'networks': [], u'image_api_version': u'2', 'verify': True, > u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': > 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, > 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', > 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', > 'project_domain_name': 'Default'}, 'default_domain': 'default', > u'container_api_version': u'1', u'image_api_use_tasks': False, > u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', > 'timing': False, 'password': '***', u'application_catalog_api_version': > u'1', 'cacert': None, u'key_manager_api_version': u'v1', > u'workflow_api_version': u'2', 'deferred_help': False, > u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, > u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, > u'interface': None, u'disable_vendor_agent': {}} > Auth plugin password selected > auth_config_hook(): {'auth_type': 'password', 'beta_command': False, > u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', > u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', > u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', > 'networks': [], u'image_api_version': u'2', 'verify': True, > u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': > 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, > 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', > 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', > 'project_domain_name': 'Default'}, 'default_domain': 'default', > u'container_api_version': u'1', u'image_api_use_tasks': False, > u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', > 'timing': False, 'password': '***', u'application_catalog_api_version': > u'1', 'cacert': None, u'key_manager_api_version': u'v1', > u'workflow_api_version': u'2', 'deferred_help': False, > u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, > u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, > u'interface': None, u'disable_vendor_agent': {}} > command: endpoint list -> openstackclient.identity.v3.endpoint.ListEndpoint > (auth=True) > Auth plugin password selected > auth_config_hook(): {'auth_type': 'password', 'beta_command': False, > u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', > u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', > u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', > 'networks': [], u'image_api_version': u'2', 'verify': True, > u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': > 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, > 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', > 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', > 'project_domain_name': 'Default'}, 'default_domain': 'default', > u'container_api_version': u'1', u'image_api_use_tasks': False, > u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', > 'timing': False, 'password': '***', u'application_catalog_api_version': > u'1', 'cacert': None, u'key_manager_api_version': u'v1', > u'workflow_api_version': u'2', 
'deferred_help': False, > u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, > u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, > u'interface': None, u'disable_vendor_agent': {}} > Using auth plugin: password > Using parameters {'username': 'admin', 'project_name': 'admin', > 'user_domain_name': 'Default', 'auth_url': 'http://controller:35357/v3/', > 'password': '***', 'project_domain_name': 'Default'} > Get auth_ref > REQ: curl -g -i -X GET http://controller:35357/v3/ -H "Accept: > application/json" -H "User-Agent: osc-lib/1.7.0 keystoneauth1/3.1.0 > python-requests/2.14.2 CPython/2.7.5" > Starting new HTTP connection (1): controller > http://controller:35357 "GET /v3/ HTTP/1.1" 400 347 > RESP: [400] Date: Tue, 20 Feb 2018 16:50:08 GMT Server: Apache/2.4.6 > (CentOS) mod_wsgi/3.4 Python/2.7.5 Content-Length: 347 Connection: close > Content-Type: text/html; charset=iso-8859-1 > RESP BODY: Omitted, Content-Type is set to text/html; charset=iso-8859-1. > Only application/json responses have their bodies logged. > > Request returned failure status: 400 > Failed to discover available identity versions when contacting > http://controller:35357/v3/. Attempting to parse version from URL. > Making authentication request to http://controller:35357/v3/auth/tokens > Resetting dropped connection: controller > http://controller:35357 "POST /v3/auth/tokens HTTP/1.1" 400 347 > Request returned failure status: 400 > Bad Request (HTTP 400) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/cliff/app.py", line 393, in > run_subcommand > self.prepare_to_run_command(cmd) > File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line > 200, in prepare_to_run_command > return super(OpenStackShell, self).prepare_to_run_command(cmd) > File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 437, in > prepare_to_run_command > self.client_manager.auth_ref > File "/usr/lib/python2.7/site-packages/openstackclient/common/clientmanager.py", > line 99, in auth_ref > return super(ClientManager, self).auth_ref > File "/usr/lib/python2.7/site-packages/osc_lib/clientmanager.py", line > 239, in auth_ref > self._auth_ref = self.auth.get_auth_ref(self.session) > File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/generic/base.py", > line 198, in get_auth_ref > return self._plugin.get_auth_ref(session, **kwargs) > File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py", > line 167, in get_auth_ref > authenticated=False, log=False, **rkwargs) > File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line > 853, in post > return self.request(url, 'POST', **kwargs) > File "/usr/lib/python2.7/site-packages/osc_lib/session.py", line 40, in > request > resp = super(TimingSession, self).request(url, method, **kwargs) > File "/usr/lib/python2.7/site-packages/positional/__init__.py", line > 101, in inner > return wrapped(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line > 742, in request > raise exceptions.from_response(resp, method, url) > BadRequest: Bad Request (HTTP 400) > clean_up ListEndpoint: Bad Request (HTTP 400) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 134, in > run > ret_val = super(OpenStackShell, self).run(argv) > File "/usr/lib/python2.7/site-packages/cliff/app.py", line 279, in run > result = self.run_subcommand(remainder) > File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 169, in > 
run_subcommand > ret_value = super(OpenStackShell, self).run_subcommand(argv) > File "/usr/lib/python2.7/site-packages/cliff/app.py", line 393, in > run_subcommand > self.prepare_to_run_command(cmd) > File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line > 200, in prepare_to_run_command > return super(OpenStackShell, self).prepare_to_run_command(cmd) > File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 437, in > prepare_to_run_command > self.client_manager.auth_ref > File "/usr/lib/python2.7/site-packages/openstackclient/common/clientmanager.py", > line 99, in auth_ref > return super(ClientManager, self).auth_ref > File "/usr/lib/python2.7/site-packages/osc_lib/clientmanager.py", line > 239, in auth_ref > self._auth_ref = self.auth.get_auth_ref(self.session) > File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/generic/base.py", > line 198, in get_auth_ref > return self._plugin.get_auth_ref(session, **kwargs) > File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py", > line 167, in get_auth_ref > authenticated=False, log=False, **rkwargs) > File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line > 853, in post > return self.request(url, 'POST', **kwargs) > File "/usr/lib/python2.7/site-packages/osc_lib/session.py", line 40, in > request > resp = super(TimingSession, self).request(url, method, **kwargs) > File "/usr/lib/python2.7/site-packages/positional/__init__.py", line > 101, in inner > return wrapped(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line > 742, in request > raise exceptions.from_response(resp, method, url) > BadRequest: Bad Request (HTTP 400) > > END return value: 1 > [root at controller ~]# > > > On Tue, Feb 20, 2018 at 10:10 PM, Erik McCormick < > emccormick at cirrusseven.com> wrote: > >> According to your bootstrap and auth file, you're using >> http://controller:35357/v3, but the error you posted said >> http://nagraj_controller:35357/v3/. >> >> Run this: >> >> openstack --debug endpoint list >> >> Paste the output in here. >> >> -Erik >> >> On Tue, Feb 20, 2018 at 11:22 AM, Guru Desai wrote: >> > yes, did the bootstrap commands and everything went fine i.e no errors. >> > admin port 35357 as mentioned in the pike install guide for keystone.. >> > >> > keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ >> > >> > --bootstrap-admin-url http://controller:35357/v3 \ >> > >> > --bootstrap-internal-url http://controller:5000/v3 \ >> > >> > --bootstrap-public-url http://controller:5000/v3 \ >> > >> > --bootstrap-region-id RegionOne >> > >> > >> > >> > On Tue, Feb 20, 2018 at 9:43 PM, Remo Mattei wrote: >> >> >> >> Why are you auth on the admin port? Try the default 5000? >> >> >> >> >> >> On Feb 20, 2018, at 7:58 AM, Erik McCormick < >> emccormick at cirrusseven.com> >> >> wrote: >> >> >> >> Did you run all the keystone-manage bootstrap commands? This looks >> >> like you're trying to create the domain you're supposed to be >> >> authenticating against. >> >> >> >> On Tue, Feb 20, 2018 at 10:34 AM, Guru Desai >> wrote: >> >> >> >> Hi Nithish, >> >> >> >> That part is verified. 
Below is the snippet of the rc file >> >> >> >> export OS_USERNAME=admin >> >> export OS_PASSWORD=ADMIN_PASS >> >> export OS_PROJECT_NAME=admin >> >> export OS_USER_DOMAIN_NAME=Default >> >> export OS_PROJECT_DOMAIN_NAME=Default >> >> export OS_AUTH_URL=http://controller:35357/v3 >> >> export OS_IDENTITY_API_VERSION=3 >> >> >> >> >> >> [root at controller~]# openstack domain create --description "Default >> Domain" >> >> default >> >> Failed to discover available identity versions when contacting >> >> http://controller:35357/v3/. Attempting to parse version from URL. >> >> Bad Request (HTTP 400) >> >> >> >> >> >> On Tue, Feb 20, 2018 at 7:36 PM, nithish B >> >> wrote: >> >> >> >> >> >> Hi Guru, >> >> This looks more like a problem of finding the credentials. Please >> check if >> >> you sourced the credentials, and you did it right. A sample source >> >> parameters might look like the following: >> >> >> >> export OS_USERNAME=admin >> >> export OS_PASSWORD= >> >> export OS_TENANT_NAME=admin >> >> export OS_AUTH_URL=https://nagaraj_controller:5000/v3 >> >> >> >> Thanks. >> >> >> >> >> >> Regards, >> >> Nitish B. >> >> >> >> On Tue, Feb 20, 2018 at 5:26 AM, Guru Desai wrote: >> >> >> >> >> >> Hi >> >> >> >> I am trying to install openstack pike and following the instruction >> >> mentioned in >> >> https://docs.openstack.org/keystone/pike/install/keystone- >> install-rdo.html >> >> for installing keystone. I am done with the environment setup. But >> after >> >> installing keystone, tried to create a project as mentioned in the >> guide. >> >> It >> >> shows below error. Not seeing any errors in the logs as such. >> Appreciate >> >> any >> >> inputs : >> >> >> >> openstack project create --domain default --description "Service >> >> Project" service >> >> >> >> Failed to discover available identity versions when contacting >> >> http://nagraj_controller:35357/v3/. Attempting to parse version from >> URL. >> >> Bad Request (HTTP 400) >> >> >> >> Guru >> >> >> >> >> >> _______________________________________________ >> >> Mailing list: >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> Post to : openstack at lists.openstack.org >> >> Unsubscribe : >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> >> Mailing list: >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> Post to : openstack at lists.openstack.org >> >> Unsubscribe : >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> >> >> >> >> _______________________________________________ >> >> Mailing list: >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> Post to : openstack at lists.openstack.org >> >> Unsubscribe : >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> >> >> >> > >> > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gurud78 at gmail.com Tue Feb 20 17:50:05 2018 From: gurud78 at gmail.com (Guru Desai) Date: Tue, 20 Feb 2018 23:20:05 +0530 Subject: [Openstack] [openstack] [pike] In-Reply-To: References: <0B9EAEDC-77F1-4AB6-B8FC-5888F1DBFB3C@italy1.com> Message-ID: no, everywhere its "controller" now. 
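Since the debug output shows the 400 coming back with Server: Apache and Content-Type: text/html rather than a JSON error from Keystone, it looks like the request is being rejected before it reaches the WSGI application. A few checks worth running on the controller (a sketch only; the log and config paths assume the RDO/CentOS layout from the install guide):

# Repeat the failing request, then check the tail of the Apache and Keystone logs
curl -v http://controller:35357/v3/
tail -n 50 /var/log/httpd/error_log /var/log/keystone/keystone.log

# Rule out name resolution or proxy surprises by hitting loopback directly
getent hosts controller
env | grep -i proxy
curl -v http://127.0.0.1:35357/v3/

# Confirm the keystone vhost is loaded and httpd is listening on 5000 and 35357
httpd -S
ss -tlnp | grep -E ':(5000|35357)'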
On Tue, Feb 20, 2018 at 10:56 PM, nithish B wrote: > Wait. Why does it say curl -g -i -X GET http://controller:35357/v3/ -H > "Accept: application/json" - shouldn't it be your "h > ttp://nagraj_controller:35357/v3/ ."? > > Regards, > Nitish B. > > On Tue, Feb 20, 2018 at 11:54 AM, Guru Desai wrote: > >> Here you go with the output : >> >> >> [root at controller ~]# openstack --debug endpoint list >> START with options: [u'--debug', u'endpoint', u'list'] >> options: Namespace(access_key='', access_secret='***', >> access_token='***', access_token_endpoint='', access_token_type='', >> auth_type='', auth_url='http://controller:35357/v3/', cacert=None, >> cert='', client_id='', client_secret='***', cloud='', code='', >> consumer_key='', consumer_secret='***', debug=True, >> default_domain='default', default_domain_id='', default_domain_name='', >> deferred_help=False, discovery_endpoint='', domain_id='', domain_name='', >> endpoint='', identity_provider='', identity_provider_url='', insecure=None, >> interface='', key='', log_file=None, openid_scope='', >> os_beta_command=False, os_compute_api_version='', >> os_identity_api_version='3', os_image_api_version='', >> os_network_api_version='', os_object_api_version='', os_project_id=None, >> os_project_name=None, os_volume_api_version='', passcode='', >> password='***', profile='', project_domain_id='', >> project_domain_name='Default', project_id='', project_name='admin', >> protocol='', redirect_uri='', region_name='', service_provider_endpoint='', >> service_provider_entity_id='', timing=False, token='***', trust_id='', >> url='', user_domain_id='', user_domain_name='Default', user_id='', >> username='admin', verbose_level=3, verify=None) >> Auth plugin password selected >> auth_config_hook(): {'auth_type': 'password', 'beta_command': False, >> u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', >> u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', >> u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', >> 'networks': [], u'image_api_version': u'2', 'verify': True, >> u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': >> 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, >> 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', >> 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', >> 'project_domain_name': 'Default'}, 'default_domain': 'default', >> u'container_api_version': u'1', u'image_api_use_tasks': False, >> u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', >> 'timing': False, 'password': '***', u'application_catalog_api_version': >> u'1', 'cacert': None, u'key_manager_api_version': u'v1', >> u'workflow_api_version': u'2', 'deferred_help': False, >> u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, >> u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, >> u'interface': None, u'disable_vendor_agent': {}} >> defaults: {u'auth_type': 'password', u'status': u'active', >> u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', >> 'api_timeout': None, u'baremetal_api_version': u'1', u'image_api_version': >> u'2', u'container_infra_api_version': u'1', u'metering_api_version': >> u'2', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', >> u'orchestration_api_version': u'1', 'cacert': None, u'network_api_version': >> u'2', u'message': u'', u'image_format': u'qcow2', >> u'application_catalog_api_version': u'1', 
u'key_manager_api_version': >> u'v1', u'workflow_api_version': u'2', 'verify': True, >> u'identity_api_version': u'2.0', u'volume_api_version': u'2', 'cert': None, >> u'secgroup_source': u'neutron', u'container_api_version': u'1', >> u'dns_api_version': u'2', u'object_store_api_version': u'1', u'interface': >> None, u'disable_vendor_agent': {}} >> cloud cfg: {'auth_type': 'password', 'beta_command': False, >> u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', >> u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', >> u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', >> 'networks': [], u'image_api_version': u'2', 'verify': True, >> u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': >> 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, >> 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', >> 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', >> 'project_domain_name': 'Default'}, 'default_domain': 'default', >> u'container_api_version': u'1', u'image_api_use_tasks': False, >> u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', >> 'timing': False, 'password': '***', u'application_catalog_api_version': >> u'1', 'cacert': None, u'key_manager_api_version': u'v1', >> u'workflow_api_version': u'2', 'deferred_help': False, >> u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, >> u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, >> u'interface': None, u'disable_vendor_agent': {}} >> compute API version 2, cmd group openstack.compute.v2 >> network API version 2, cmd group openstack.network.v2 >> image API version 2, cmd group openstack.image.v2 >> volume API version 2, cmd group openstack.volume.v2 >> identity API version 3, cmd group openstack.identity.v3 >> object_store API version 1, cmd group openstack.object_store.v1 >> neutronclient API version 2, cmd group openstack.neutronclient.v2 >> Auth plugin password selected >> auth_config_hook(): {'auth_type': 'password', 'beta_command': False, >> u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', >> u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', >> u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', >> 'networks': [], u'image_api_version': u'2', 'verify': True, >> u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': >> 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, >> 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', >> 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', >> 'project_domain_name': 'Default'}, 'default_domain': 'default', >> u'container_api_version': u'1', u'image_api_use_tasks': False, >> u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', >> 'timing': False, 'password': '***', u'application_catalog_api_version': >> u'1', 'cacert': None, u'key_manager_api_version': u'v1', >> u'workflow_api_version': u'2', 'deferred_help': False, >> u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, >> u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, >> u'interface': None, u'disable_vendor_agent': {}} >> Auth plugin password selected >> auth_config_hook(): {'auth_type': 'password', 'beta_command': False, >> u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', >> u'metering_api_version': u'2', 'auth_url': 
'http://controller:35357/v3/', >> u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', >> 'networks': [], u'image_api_version': u'2', 'verify': True, >> u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': >> 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, >> 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', >> 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', >> 'project_domain_name': 'Default'}, 'default_domain': 'default', >> u'container_api_version': u'1', u'image_api_use_tasks': False, >> u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', >> 'timing': False, 'password': '***', u'application_catalog_api_version': >> u'1', 'cacert': None, u'key_manager_api_version': u'v1', >> u'workflow_api_version': u'2', 'deferred_help': False, >> u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, >> u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, >> u'interface': None, u'disable_vendor_agent': {}} >> command: endpoint list -> openstackclient.identity.v3.endpoint.ListEndpoint >> (auth=True) >> Auth plugin password selected >> auth_config_hook(): {'auth_type': 'password', 'beta_command': False, >> u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', >> u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3/', >> u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', >> 'networks': [], u'image_api_version': u'2', 'verify': True, >> u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': >> 'admin', u'container_infra_api_version': u'1', 'verbose_level': 3, >> 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', >> 'auth': {'user_domain_name': 'Default', 'project_name': 'admin', >> 'project_domain_name': 'Default'}, 'default_domain': 'default', >> u'container_api_version': u'1', u'image_api_use_tasks': False, >> u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', >> 'timing': False, 'password': '***', u'application_catalog_api_version': >> u'1', 'cacert': None, u'key_manager_api_version': u'v1', >> u'workflow_api_version': u'2', 'deferred_help': False, >> u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, >> u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, >> u'interface': None, u'disable_vendor_agent': {}} >> Using auth plugin: password >> Using parameters {'username': 'admin', 'project_name': 'admin', >> 'user_domain_name': 'Default', 'auth_url': 'http://controller:35357/v3/', >> 'password': '***', 'project_domain_name': 'Default'} >> Get auth_ref >> REQ: curl -g -i -X GET http://controller:35357/v3/ -H "Accept: >> application/json" -H "User-Agent: osc-lib/1.7.0 keystoneauth1/3.1.0 >> python-requests/2.14.2 CPython/2.7.5" >> Starting new HTTP connection (1): controller >> http://controller:35357 "GET /v3/ HTTP/1.1" 400 347 >> RESP: [400] Date: Tue, 20 Feb 2018 16:50:08 GMT Server: Apache/2.4.6 >> (CentOS) mod_wsgi/3.4 Python/2.7.5 Content-Length: 347 Connection: close >> Content-Type: text/html; charset=iso-8859-1 >> RESP BODY: Omitted, Content-Type is set to text/html; charset=iso-8859-1. >> Only application/json responses have their bodies logged. >> >> Request returned failure status: 400 >> Failed to discover available identity versions when contacting >> http://controller:35357/v3/. Attempting to parse version from URL. 
>> Making authentication request to http://controller:35357/v3/auth/tokens >> Resetting dropped connection: controller >> http://controller:35357 "POST /v3/auth/tokens HTTP/1.1" 400 347 >> Request returned failure status: 400 >> Bad Request (HTTP 400) >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/cliff/app.py", line 393, in >> run_subcommand >> self.prepare_to_run_command(cmd) >> File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line >> 200, in prepare_to_run_command >> return super(OpenStackShell, self).prepare_to_run_command(cmd) >> File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 437, in >> prepare_to_run_command >> self.client_manager.auth_ref >> File "/usr/lib/python2.7/site-packages/openstackclient/common/clientmanager.py", >> line 99, in auth_ref >> return super(ClientManager, self).auth_ref >> File "/usr/lib/python2.7/site-packages/osc_lib/clientmanager.py", line >> 239, in auth_ref >> self._auth_ref = self.auth.get_auth_ref(self.session) >> File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/generic/base.py", >> line 198, in get_auth_ref >> return self._plugin.get_auth_ref(session, **kwargs) >> File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py", >> line 167, in get_auth_ref >> authenticated=False, log=False, **rkwargs) >> File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line >> 853, in post >> return self.request(url, 'POST', **kwargs) >> File "/usr/lib/python2.7/site-packages/osc_lib/session.py", line 40, >> in request >> resp = super(TimingSession, self).request(url, method, **kwargs) >> File "/usr/lib/python2.7/site-packages/positional/__init__.py", line >> 101, in inner >> return wrapped(*args, **kwargs) >> File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line >> 742, in request >> raise exceptions.from_response(resp, method, url) >> BadRequest: Bad Request (HTTP 400) >> clean_up ListEndpoint: Bad Request (HTTP 400) >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 134, in >> run >> ret_val = super(OpenStackShell, self).run(argv) >> File "/usr/lib/python2.7/site-packages/cliff/app.py", line 279, in run >> result = self.run_subcommand(remainder) >> File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 169, in >> run_subcommand >> ret_value = super(OpenStackShell, self).run_subcommand(argv) >> File "/usr/lib/python2.7/site-packages/cliff/app.py", line 393, in >> run_subcommand >> self.prepare_to_run_command(cmd) >> File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line >> 200, in prepare_to_run_command >> return super(OpenStackShell, self).prepare_to_run_command(cmd) >> File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 437, in >> prepare_to_run_command >> self.client_manager.auth_ref >> File "/usr/lib/python2.7/site-packages/openstackclient/common/clientmanager.py", >> line 99, in auth_ref >> return super(ClientManager, self).auth_ref >> File "/usr/lib/python2.7/site-packages/osc_lib/clientmanager.py", line >> 239, in auth_ref >> self._auth_ref = self.auth.get_auth_ref(self.session) >> File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/generic/base.py", >> line 198, in get_auth_ref >> return self._plugin.get_auth_ref(session, **kwargs) >> File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py", >> line 167, in get_auth_ref >> authenticated=False, log=False, **rkwargs) >> File 
"/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line >> 853, in post >> return self.request(url, 'POST', **kwargs) >> File "/usr/lib/python2.7/site-packages/osc_lib/session.py", line 40, >> in request >> resp = super(TimingSession, self).request(url, method, **kwargs) >> File "/usr/lib/python2.7/site-packages/positional/__init__.py", line >> 101, in inner >> return wrapped(*args, **kwargs) >> File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line >> 742, in request >> raise exceptions.from_response(resp, method, url) >> BadRequest: Bad Request (HTTP 400) >> >> END return value: 1 >> [root at controller ~]# >> >> >> On Tue, Feb 20, 2018 at 10:10 PM, Erik McCormick < >> emccormick at cirrusseven.com> wrote: >> >>> According to your bootstrap and auth file, you're using >>> http://controller:35357/v3, but the error you posted said >>> http://nagraj_controller:35357/v3/. >>> >>> Run this: >>> >>> openstack --debug endpoint list >>> >>> Paste the output in here. >>> >>> -Erik >>> >>> On Tue, Feb 20, 2018 at 11:22 AM, Guru Desai wrote: >>> > yes, did the bootstrap commands and everything went fine i.e no errors. >>> > admin port 35357 as mentioned in the pike install guide for keystone.. >>> > >>> > keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ >>> > >>> > --bootstrap-admin-url http://controller:35357/v3 \ >>> > >>> > --bootstrap-internal-url http://controller:5000/v3 \ >>> > >>> > --bootstrap-public-url http://controller:5000/v3 \ >>> > >>> > --bootstrap-region-id RegionOne >>> > >>> > >>> > >>> > On Tue, Feb 20, 2018 at 9:43 PM, Remo Mattei wrote: >>> >> >>> >> Why are you auth on the admin port? Try the default 5000? >>> >> >>> >> >>> >> On Feb 20, 2018, at 7:58 AM, Erik McCormick < >>> emccormick at cirrusseven.com> >>> >> wrote: >>> >> >>> >> Did you run all the keystone-manage bootstrap commands? This looks >>> >> like you're trying to create the domain you're supposed to be >>> >> authenticating against. >>> >> >>> >> On Tue, Feb 20, 2018 at 10:34 AM, Guru Desai >>> wrote: >>> >> >>> >> Hi Nithish, >>> >> >>> >> That part is verified. Below is the snippet of the rc file >>> >> >>> >> export OS_USERNAME=admin >>> >> export OS_PASSWORD=ADMIN_PASS >>> >> export OS_PROJECT_NAME=admin >>> >> export OS_USER_DOMAIN_NAME=Default >>> >> export OS_PROJECT_DOMAIN_NAME=Default >>> >> export OS_AUTH_URL=http://controller:35357/v3 >>> >> export OS_IDENTITY_API_VERSION=3 >>> >> >>> >> >>> >> [root at controller~]# openstack domain create --description "Default >>> Domain" >>> >> default >>> >> Failed to discover available identity versions when contacting >>> >> http://controller:35357/v3/. Attempting to parse version from URL. >>> >> Bad Request (HTTP 400) >>> >> >>> >> >>> >> On Tue, Feb 20, 2018 at 7:36 PM, nithish B >>> >> wrote: >>> >> >>> >> >>> >> Hi Guru, >>> >> This looks more like a problem of finding the credentials. Please >>> check if >>> >> you sourced the credentials, and you did it right. A sample source >>> >> parameters might look like the following: >>> >> >>> >> export OS_USERNAME=admin >>> >> export OS_PASSWORD= >>> >> export OS_TENANT_NAME=admin >>> >> export OS_AUTH_URL=https://nagaraj_controller:5000/v3 >>> >> >>> >> Thanks. >>> >> >>> >> >>> >> Regards, >>> >> Nitish B. 
>>> >> >>> >> On Tue, Feb 20, 2018 at 5:26 AM, Guru Desai >>> wrote: >>> >> >>> >> >>> >> Hi >>> >> >>> >> I am trying to install openstack pike and following the instruction >>> >> mentioned in >>> >> https://docs.openstack.org/keystone/pike/install/keystone-in >>> stall-rdo.html >>> >> for installing keystone. I am done with the environment setup. But >>> after >>> >> installing keystone, tried to create a project as mentioned in the >>> guide. >>> >> It >>> >> shows below error. Not seeing any errors in the logs as such. >>> Appreciate >>> >> any >>> >> inputs : >>> >> >>> >> openstack project create --domain default --description "Service >>> >> Project" service >>> >> >>> >> Failed to discover available identity versions when contacting >>> >> http://nagraj_controller:35357/v3/. Attempting to parse version from >>> URL. >>> >> Bad Request (HTTP 400) >>> >> >>> >> Guru >>> >> >>> >> >>> >> _______________________________________________ >>> >> Mailing list: >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >> Post to : openstack at lists.openstack.org >>> >> Unsubscribe : >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >> >>> >> >>> >> >>> >> >>> >> _______________________________________________ >>> >> Mailing list: >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >> Post to : openstack at lists.openstack.org >>> >> Unsubscribe : >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >> >>> >> >>> >> _______________________________________________ >>> >> Mailing list: >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >> Post to : openstack at lists.openstack.org >>> >> Unsubscribe : >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >> >>> >> >>> > >>> >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at not.mn Tue Feb 20 18:23:08 2018 From: me at not.mn (John Dickinson) Date: Tue, 20 Feb 2018 10:23:08 -0800 Subject: [Openstack] Openstack data replication In-Reply-To: References: Message-ID: For example, you can have 3 replicas stored in a global cluster and get dispersion across multiple geographic regions. But it's all one logical cluster. With container sync, you've got separate clusters with their own durability characteristics. So you would have eg 3 replicas in each cluster, meaning 6x in the data that is synced between two clusters. --John On 18 Feb 2018, at 22:11, aRaviNd wrote: > Thanks John. > > You mentioned sync process in global clusters is more efficient. Could you > please let me know how sync process is more efficient in global clusters > than container sync? > > Aravind > > On Wed, Feb 14, 2018 at 9:10 PM, John Dickinson wrote: > >> A global cluster is one logical cluster that durably stores data across >> all the available failure domains (the highest level of failure domain is >> "region"). For example, if you have 2 regions (ie DCs)and you're using 4 >> replicas, you'll end up with 2 replicas in each. >> >> Container sync is for taking a subset of data stored in one Swift cluster >> and synchronizing it with a different Swift cluster. Each Swift cluster is >> autonomous and handles it's own durability. 
So, eg if each Swift cluster >> uses 3 replicas, you'll end up with 6x total storage for the data that is >> synced. >> >> In most cases, people use global clusters and are happy with it. It's >> definitely been more used than container sync, and the sync process in >> global clusters is more efficient. >> >> However, deploying a multi-region Swift cluster comes with an extra set of >> challenges above and beyond a single-site deployment. You've got to >> consider more things with your inter-region networking, your network >> routing, the access patterns in each region, your requirements around >> locality, and the data placement of your data. >> >> All of these challenges are solvable, of course. Start with >> https://swift.openstack.org and also feel free to ask here on the mailing >> list or on freenode IRC in #openstack-swift. >> >> Good luck! >> >> John >> >> >> On 14 Feb 2018, at 6:55, aRaviNd wrote: >> >> Hi All, >> >> Whats the difference between container sync and global cluster? Which >> should we use for large data set of 100 Tb ? >> >> Aravind >> >> On Feb 13, 2018 7:52 PM, "aRaviNd" wrote: >> >> Hi All, >> >> We are working on implementing Openstack swift replication and would like >> to know whats the better approach, container sync or global cluster, on >> what scenario we should choose one above the another. >> >> Swift cluster will be used as a backend for web application deployed on >> multiple regions which is configured as active passive using DNS. >> >> Data usage can grow upto 100TB starting with 1TB. What will be better >> option to sync data between regions? >> >> Thank You >> >> Aravind M D >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ >> openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ >> openstack >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From pabelanger at redhat.com Wed Feb 21 01:19:59 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 20 Feb 2018 20:19:59 -0500 Subject: [Openstack] Release Naming for S - time to suggest a name! Message-ID: <20180221011959.GA30957@localhost.localdomain> Hey everybody, Once again, it is time for us to pick a name for our "S" release. Since the associated Summit will be in Berlin, the Geographic Location has been chosen as "Berlin" (State). Nominations are now open. Please add suitable names to https://wiki.openstack.org/wiki/Release_Naming/S_Proposals between now and 2018-03-05 23:59 UTC. In case you don't remember the rules: * Each release name must start with the letter of the ISO basic Latin alphabet following the initial letter of the previous release, starting with the initial release of "Austin". After "Z", the next name should start with "A" again. * The name must be composed only of the 26 characters of the ISO basic Latin alphabet. Names which can be transliterated into this character set are also acceptable. * The name must refer to the physical or human geography of the region encompassing the location of the OpenStack design summit for the corresponding release. 
The exact boundaries of the geographic region under consideration must be declared before the opening of nominations, as part of the initiation of the selection process. * The name must be a single word with a maximum of 10 characters. Words that describe the feature should not be included, so "Foo City" or "Foo Peak" would both be eligible as "Foo". Names which do not meet these criteria but otherwise sound really cool should be added to a separate section of the wiki page and the TC may make an exception for one or more of them to be considered in the Condorcet poll. The naming official is responsible for presenting the list of exceptional names for consideration to the TC before the poll opens. Let the naming begin. Paul From ambadiaravind at gmail.com Wed Feb 21 17:20:58 2018 From: ambadiaravind at gmail.com (aRaviNd) Date: Wed, 21 Feb 2018 22:50:58 +0530 Subject: [Openstack] Openstack data replication In-Reply-To: References: Message-ID: Thanks John. On Tue, Feb 20, 2018 at 11:53 PM, John Dickinson wrote: > For example, you can have 3 replicas stored in a global cluster and get > dispersion across multiple geographic regions. But it's all one logical > cluster. > > With container sync, you've got separate clusters with their own > durability characteristics. So you would have eg 3 replicas in each > cluster, meaning 6x in the data that is synced between two clusters. > > --John > > > On 18 Feb 2018, at 22:11, aRaviNd wrote: > > Thanks John. > > You mentioned sync process in global clusters is more efficient. Could you > please let me know how sync process is more efficient in global clusters > than container sync? > > Aravind > > On Wed, Feb 14, 2018 at 9:10 PM, John Dickinson wrote: > >> A global cluster is one logical cluster that durably stores data across >> all the available failure domains (the highest level of failure domain is >> "region"). For example, if you have 2 regions (ie DCs)and you're using 4 >> replicas, you'll end up with 2 replicas in each. >> >> Container sync is for taking a subset of data stored in one Swift cluster >> and synchronizing it with a different Swift cluster. Each Swift cluster is >> autonomous and handles it's own durability. So, eg if each Swift cluster >> uses 3 replicas, you'll end up with 6x total storage for the data that is >> synced. >> >> In most cases, people use global clusters and are happy with it. It's >> definitely been more used than container sync, and the sync process in >> global clusters is more efficient. >> >> However, deploying a multi-region Swift cluster comes with an extra set >> of challenges above and beyond a single-site deployment. You've got to >> consider more things with your inter-region networking, your network >> routing, the access patterns in each region, your requirements around >> locality, and the data placement of your data. >> >> All of these challenges are solvable, of course. Start with >> https://swift.openstack.org and also feel free to ask here on the >> mailing list or on freenode IRC in #openstack-swift. >> >> Good luck! >> >> John >> >> >> On 14 Feb 2018, at 6:55, aRaviNd wrote: >> >> Hi All, >> >> Whats the difference between container sync and global cluster? Which >> should we use for large data set of 100 Tb ? >> >> Aravind >> >> On Feb 13, 2018 7:52 PM, "aRaviNd" wrote: >> >> Hi All, >> >> We are working on implementing Openstack swift replication and would like >> to know whats the better approach, container sync or global cluster, on >> what scenario we should choose one above the another. 
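For reference, the two approaches John describes look quite different in practice: a global cluster is expressed in the rings themselves (the region prefix in each device entry), while container sync is enabled per container with a sync target and key and needs matching entries in /etc/swift/container-sync-realms.conf on both clusters. The IPs, realm and container names below are purely illustrative:

# global cluster: regions are part of the ring
swift-ring-builder object.builder add r1z1-10.0.1.10:6200/sdb 100
swift-ring-builder object.builder add r2z1-10.0.2.10:6200/sdb 100
swift-ring-builder object.builder rebalance

# container sync: per container, between otherwise independent clusters
swift post -t '//realm1/cluster2/AUTH_project/backups' -k 'sync-key' backups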
>> >> Swift cluster will be used as a backend for web application deployed on >> multiple regions which is configured as active passive using DNS. >> >> Data usage can grow upto 100TB starting with 1TB. What will be better >> option to sync data between regions? >> >> Thank You >> >> Aravind M D >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ambadiaravind at gmail.com Wed Feb 21 17:23:52 2018 From: ambadiaravind at gmail.com (aRaviNd) Date: Wed, 21 Feb 2018 22:53:52 +0530 Subject: [Openstack] Swift Implementation on VMWare environment Message-ID: Hi All, Does anybody implemented swift cluster on production Vmware environment? if so, what will be ideal VM configuration for a PAC node and an object node. We are planning for a swift cluster of 100TB. Aravind -------------- next part -------------- An HTML attachment was scrubbed... URL: From yedhusastri at gmail.com Thu Feb 22 14:31:19 2018 From: yedhusastri at gmail.com (Yedhu Sastry) Date: Thu, 22 Feb 2018 15:31:19 +0100 Subject: [Openstack] Compute Node not mounting disk to VM's Message-ID: Hello, I have an OpenStack cluster(Newton) which is basically a test cluster. After the regular OS security update and upgrade in all my compute nodes I have problem with New VMs. While launching new VM's Iam getting the Error "ALERT! LABEL=cloudimg-rootfs does not exist Dropping to a shell!" in the console log of VM's. In horizon it is showing as active. Iam booting from image not from volume. Before the update everything was fine. Then I checked all the logs related to OpenStack and I cant find any info related to this. I spent days and I found that after the update libvirt is now using scsi instead of virtio. I dont know why. All the VM's which I created before the update are running fine and is using 'virtio'. Then I tried to manually change the instancexx.xml file of the libvirt to use " " and started the VM again using 'virsh start instancexx'. VM got started and then went to shutdown state. But in the console log I can see VM is getting IP and properly booting without any error and then it goes to poweroff state. 1) Whether this issue is related to the update of libvirt?? If so why libvirt is not using virtio_blk anymore?? Why it is using only virtio_scsi?? Is it possible to change libvirt to use virtio_blk instead of virtio_scsi?? 2) I found nova package version on compute nodes are 14.0.10 and on controller node it is 14.0.1. Whether this is the cause of the problem?? Whether an update in controller node solve this issue?? Iam not sure about this. 3) Why Task status of instancexx is showing as Powering Off in horizon after 'virsh start instancexx' in the compute node?? Why it is not starting the VM with the manually customized .xml file of libvirt?? Any help is really appreciated. -- Thank you for your time and have a nice day, With kind regards, Yedhu Sastri -------------- next part -------------- An HTML attachment was scrubbed... URL: From shilla.saebi at gmail.com Thu Feb 22 19:40:05 2018 From: shilla.saebi at gmail.com (Shilla Saebi) Date: Thu, 22 Feb 2018 14:40:05 -0500 Subject: [Openstack] [User-committee] [Openstack-operators] User Committee Elections Message-ID: Hi Everyone, Just a friendly reminder that voting is still open! 
Please be sure to check out the candidates - https://goo.gl/x183he - and vote before February 25th, 11:59 UTC. Thanks! Shilla On Mon, Feb 19, 2018 at 1:38 PM, wrote: > I saw election email with the pointer to votes. > > See no reason for stopping it now. But extending vote for 1 more week > makes sense. > > Thanks, > Arkady > > > > *From:* Melvin Hillsman [mailto:mrhillsman at gmail.com] > *Sent:* Monday, February 19, 2018 11:32 AM > *To:* user-committee ; OpenStack > Mailing List ; OpenStack Operators < > openstack-operators at lists.openstack.org>; OpenStack Dev < > openstack-dev at lists.openstack.org>; community at lists.openstack.org > *Subject:* [Openstack-operators] User Committee Elections > > > > Hi everyone, > > > > We had to push the voting back a week if you have been keeping up with the > UC elections[0]. That being said, election officials have sent out the poll > and so voting is now open! Be sure to check out the candidates - > https://goo.gl/x183he - and get your vote in before the poll closes. > > > > [0] https://governance.openstack.org/uc/reference/uc-election-feb2018.html > > > > -- > > Kind regards, > > Melvin Hillsman > > mrhillsman at gmail.com > mobile: (832) 264-2646 > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Cheung at ezfly.com Fri Feb 23 02:30:27 2018 From: Cheung at ezfly.com (=?utf-8?B?Q2hldW5nIOaliuemrumKkw==?=) Date: Fri, 23 Feb 2018 02:30:27 +0000 Subject: [Openstack] cinder-volume can not live migration Message-ID: <1519353026.15482.2.camel@ezfly.com> Dear: If the volume size is bigger than 50G, I can not live migration lvm volume. I am using openstack pike version. Do I miss something? 
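A note for anyone hitting the same thing: with the LVM driver a migration between hosts copies the volume block by block, and on larger volumes that copy can simply take longer than cinder is prepared to wait. The commands below are standard; the timeout option is an assumption worth verifying against the Pike configuration reference:

# see why the last attempt ended with migration_status = error
grep -i migrat /var/log/cinder/cinder-volume.log | tail -50

# retry, keeping the volume locked while data is copied
cinder migrate --lock-volume True 1495b9e9-e56a-468b-a134-59b0a728fa00 controller02@lvm#LVM-SAS

# if the log shows a timeout waiting on the new volume, this cinder.conf
# default (300 seconds) is the knob to raise
[DEFAULT]
migration_create_volume_timeout_secs = 3600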
[root at controller01: cinder]# cinder get-pools +----------+--------------------------+ | Property | Value | +----------+--------------------------+ | name | controller01 at lvm#LVM-SAS | +----------+--------------------------+ +----------+--------------------------+ | Property | Value | +----------+--------------------------+ | name | controller02 at lvm#LVM-SAS | +----------+--------------------------+ [root at controller01: cinder]# openstack volume show 1495b9e9-e56a-468b-a134-59b0a728fa00 +--------------------------------+--------------------------------------+ | Field | Value | +--------------------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | consistencygroup_id | None | | created_at | 2018-02-23T02:15:14.000000 | | description | | | encrypted | False | | id | 1495b9e9-e56a-468b-a134-59b0a728fa00 | | migration_status | error | | multiattach | False | | name | windows | | os-vol-host-attr:host | controller01 at lvm#LVM-SAS | | os-vol-mig-status-attr:migstat | error | | os-vol-mig-status-attr:name_id | None | | os-vol-tenant-attr:tenant_id | 963097c754bf40c5a077f2ae89be36c3 | | properties | | | replication_status | None | | size | 51 | | snapshot_id | None | | source_volid | None | | status | available | | type | LVM-SAS | | updated_at | 2018-02-23T02:18:47.000000 | | user_id | 5bfa4f66825a40709e44a047bd251bcb | +--------------------------------+--------------------------------------+ -- 本電子郵件及其所有附件所含之資訊均屬機密,僅供指定之收件人使用,未經寄件人同意不得揭露、複製或散布本電子郵件。若您並非指定之收件人,請勿使用、保存或揭露本電子郵件之任何部分,並請立即通知寄件人並完全刪除本電子郵件。網路通訊可能含有病毒,收件人應自行確認本郵件是否安全,若因此造成損害,寄件人恕不負責。 The information contained in this communication and attachment is confidential and is intended only for the use of the recipient to which this communication is addressed. Any disclosure, copying or distribution of this communication without the sender's consents is strictly prohibited. If you are not the intended recipient, please notify the sender and delete this communication entirely without using, retaining, or disclosing any of its contents. Internet communications cannot be guaranteed to be virus-free. The recipient is responsible for ensuring that this communication is virus free and the sender accepts no liability for any damages caused by virus transmitted by this communication. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cinder-volume.log Type: text/x-log Size: 14883 bytes Desc: cinder-volume.log URL: From shilla.saebi at gmail.com Sun Feb 25 23:52:16 2018 From: shilla.saebi at gmail.com (Shilla Saebi) Date: Sun, 25 Feb 2018 18:52:16 -0500 Subject: [Openstack] User Committee Election Results - February 2018 Message-ID: Hello Everyone! Please join me in congratulating 3 newly elected members of the User Committee (UC)! The winners for the 3 seats are: Melvin Hillsman Amy Marrich Yih Leong Sun Full results can be found here: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045 Election details can also be found here: https://governance.openstack.org/uc/reference/uc-election-feb2018.html Thank you to all of the candidates, and to all of you who voted and/or promoted the election! Shilla -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From edgar.magana at workday.com Mon Feb 26 03:38:54 2018 From: edgar.magana at workday.com (Edgar Magana) Date: Mon, 26 Feb 2018 03:38:54 +0000 Subject: [Openstack] [User-committee] User Committee Election Results - February 2018 In-Reply-To: References: Message-ID: <876B0B60-ADB0-4CE4-B1FC-5110622D08BE@workday.com> Congratulations Folks! We have a great team to continue the growing of the UC. Your first action is to assign a chair for the UC and let the board of directors about your election. I wish you all the best! Edgar Magana On Feb 25, 2018, at 3:53 PM, Shilla Saebi > wrote: Hello Everyone! Please join me in congratulating 3 newly elected members of the User Committee (UC)! The winners for the 3 seats are: Melvin Hillsman Amy Marrich Yih Leong Sun Full results can be found here: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045 Election details can also be found here: https://governance.openstack.org/uc/reference/uc-election-feb2018.html Thank you to all of the candidates, and to all of you who voted and/or promoted the election! Shilla _______________________________________________ User-committee mailing list User-committee at lists.openstack.org https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_user-2Dcommittee&d=DwIGaQ&c=DS6PUFBBr_KiLo7Sjt3ljp5jaW5k2i9ijVXllEdOozc&r=G0XRJfDQsuBvqa_wpWyDAUlSpeMV4W1qfWqBfctlWwQ&m=uryEDva3eeLA17jjrm73DWw4CrzTezr7HxiJNWpJAs0&s=9y-_pHwzl3ADBVlN7GbhaF8HYVQGvTQjkEvEotC9jfw&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Feb 26 09:40:57 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 26 Feb 2018 09:40:57 +0000 Subject: [Openstack] [Openstack-operators] User Committee Election Results - February 2018 In-Reply-To: References: Message-ID: <5A93D629.2000704@openstack.org> Congrats everyone! And thanks to the UC Election Committee for managing :) Cheers, Jimmy > Shilla Saebi > February 25, 2018 at 11:52 PM > Hello Everyone! > > Please join me in congratulating 3 newly elected members of the User > Committee (UC)! The winners for the 3 seats are: > > Melvin Hillsman > Amy Marrich > Yih Leong Sun > > Full results can be found here: > https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045 > > Election details can also be found here: > https://governance.openstack.org/uc/reference/uc-election-feb2018.html > > Thank you to all of the candidates, and to all of you who voted and/or > promoted the election! > > Shilla > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From correajl at gmail.com Mon Feb 26 11:53:22 2018 From: correajl at gmail.com (Jorge Luiz Correa) Date: Mon, 26 Feb 2018 08:53:22 -0300 Subject: [Openstack] Instances lost connectivity with metadata service. Message-ID: I would like some help to identify (and correct) a problem with instances metadata during booting. My environment is a Mitaka instalation, under Ubuntu 16.04 LTS, with 1 controller, 1 network node and 5 compute nodes. I'm using classic OVS as network setup. The problem ocurs after some period of time in some projects (not all projects at same time). 
When booting a Ubuntu Cloud Image with cloud-init, instances lost conection with API metadata and doesn't get their information like key-pairs and cloud-init scripts. [ 118.924311] cloud-init[932]: 2018-02-23 18:27:05,003 - url_helper.py[WARNING]: Calling ' http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect timeout=50.0)'))] [ 136.959361] cloud-init[932]: 2018-02-23 18:27:23,038 - url_helper.py[WARNING]: Calling ' http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect timeout=17.0)'))] [ 137.967469] cloud-init[932]: 2018-02-23 18:27:24,040 - DataSourceEc2.py[CRITICAL]: Giving up on md from [' http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 seconds [ 137.972226] cloud-init[932]: 2018-02-23 18:27:24,048 - url_helper.py[WARNING]: Calling ' http://192.168.0.7/latest/meta-data/instance-id' failed [0/120s]: request error [HTTPConnectionPool(host='192.168.0.7', port=80): Max retries exceeded with url: /latest/meta-data/instance-id (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))] [ 138.974223] cloud-init[932]: 2018-02-23 18:27:25,053 - url_helper.py[WARNING]: Calling ' http://192.168.0.7/latest/meta-data/instance-id' failed [1/120s]: request error [HTTPConnectionPool(host='192.168.0.7', port=80): Max retries exceeded with url: /latest/meta-data/instance-id (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))] After give up 169.254.169.254 it tries 192.168.0.7 that is the dhcp address for the project. I've checked that neutron-l3-agent is running, without errors. On compute node where VM is running, agents and vswitch is running. I could check the namespace of a problematic project and saw an iptables rules redirecting traffic from 169.254.169.254:80 to 0.0.0.0:9697, and there is a process neutron-ns-medata_proxy_ID that opens that port. So, it look like the metadata-proxy is running fine. But, as we can see in logs there is a timeout. If I restart all services on network node sometimes solves the problem. In some cases I have to restart services on controller node (nova-api). So, all work fine for some time and start to have problems again. Where can I investigate to try finding the cause of the problem? I appreciate any help. Thank you! - JLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From igarcia at suse.com Mon Feb 26 12:44:37 2018 From: igarcia at suse.com (Itxaka Serrano Garcia) Date: Mon, 26 Feb 2018 13:44:37 +0100 Subject: [Openstack] Instances lost connectivity with metadata service. In-Reply-To: References: Message-ID: Hi! On 26/02/18 12:53, Jorge Luiz Correa wrote: > I would like some help to identify (and correct) a problem with > instances metadata during booting. My environment is a Mitaka > instalation, under Ubuntu 16.04 LTS, with 1 controller, 1 network node > and 5 compute nodes. I'm using classic OVS as network setup. > > The problem ocurs after some period of time in some projects (not all > projects at same time). 
When booting a Ubuntu Cloud Image with > cloud-init, instances lost conection with API metadata and doesn't get > their information like key-pairs and cloud-init scripts. > > [  118.924311] cloud-init[932]: 2018-02-23 18:27:05,003 - > url_helper.py[WARNING]: Calling > 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed > [101/120s]: request error [HTTPConnectionPool(host='169.254.169.254', > port=80): Max retries exceeded with url: > /2009-04-04/meta-data/instance-id (Caused by > ConnectTimeoutError( object at 0x7faabcd6fa58>, 'Connection to 169.254.169.254 timed out. > (connect timeout=50.0)'))] > [  136.959361] cloud-init[932]: 2018-02-23 18:27:23,038 - > url_helper.py[WARNING]: Calling > 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed > [119/120s]: request error [HTTPConnectionPool(host='169.254.169.254', > port=80): Max retries exceeded with url: > /2009-04-04/meta-data/instance-id (Caused by > ConnectTimeoutError( object at 0x7faabcd7f240>, 'Connection to 169.254.169.254 timed out. > (connect timeout=17.0)'))] > [  137.967469] cloud-init[932]: 2018-02-23 18:27:24,040 - > DataSourceEc2.py[CRITICAL]: Giving up on md from > ['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 > seconds > [  137.972226] cloud-init[932]: 2018-02-23 18:27:24,048 - > url_helper.py[WARNING]: Calling > 'http://192.168.0.7/latest/meta-data/instance-id' failed [0/120s]: > request error [HTTPConnectionPool(host='192.168.0.7', port=80): Max > retries exceeded with url: /latest/meta-data/instance-id (Caused by > NewConnectionError(' object at 0x7faabcd7fc18>: Failed to establish a new connection: > [Errno 111] Connection refused',))] > [  138.974223] cloud-init[932]: 2018-02-23 18:27:25,053 - > url_helper.py[WARNING]: Calling > 'http://192.168.0.7/latest/meta-data/instance-id' failed [1/120s]: > request error [HTTPConnectionPool(host='192.168.0.7', port=80): Max > retries exceeded with url: /latest/meta-data/instance-id (Caused by > NewConnectionError(' object at 0x7faabcd7fa58>: Failed to establish a new connection: > [Errno 111] Connection refused',))] > > After give up 169.254.169.254 it tries 192.168.0.7 that is the dhcp > address for the project. > > I've checked that neutron-l3-agent is running, without errors. On > compute node where VM is running, agents and vswitch is running. I > could check the namespace of a problematic project and saw an iptables > rules redirecting traffic from 169.254.169.254:80 > to 0.0.0.0:9697 , and > there is a process neutron-ns-medata_proxy_ID  that opens that port. > So, it look like the metadata-proxy is running fine. But, as we can > see in logs there is a timeout. > Did you check if port 80 is listening inside the dhcp namespace with "ip netns exec NAMESPACE netstat -punta" ? We recently hit something similar in which the ns-proxy was up and the metadata-agent as well but the port 80 was missing inside the namespace, a restart fixed it but there was no logs of a failure anywhere so it may be similar. > If I restart all services on network node sometimes solves the > problem. In some cases I have to restart services on controller node > (nova-api). So, all work fine for some time and start to have problems > again. > > Where can I investigate to try finding the cause of the problem? > > I appreciate any help. Thank you! 
> > - JLC > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at crystone.com Tue Feb 27 08:43:35 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Tue, 27 Feb 2018 08:43:35 +0000 Subject: [Openstack] Instances lost connectivity with metadata service. References: Message-ID: Did some troubleshooting on this myself just some days ago. You want to check out the neutron-metadata-agent log in /var/log/neutron/neutron-metadata-agent.log neutron-metadata-agent in turn connects to your nova keystone endpoint to talk to nova metadata api (nova api port 8775) to get instance information. I had a issue with connectivity between neutron-metadata-agent and nova metadata api causing the issue for me. Should probably check the nova metadata api logs as well. Best regards On 02/26/2018 01:00 PM, Jorge Luiz Correa wrote: I would like some help to identify (and correct) a problem with instances metadata during booting. My environment is a Mitaka instalation, under Ubuntu 16.04 LTS, with 1 controller, 1 network node and 5 compute nodes. I'm using classic OVS as network setup. The problem ocurs after some period of time in some projects (not all projects at same time). When booting a Ubuntu Cloud Image with cloud-init, instances lost conection with API metadata and doesn't get their information like key-pairs and cloud-init scripts. [ 118.924311] cloud-init[932]: 2018-02-23 18:27:05,003 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect timeout=50.0)'))] [ 136.959361] cloud-init[932]: 2018-02-23 18:27:23,038 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect timeout=17.0)'))] [ 137.967469] cloud-init[932]: 2018-02-23 18:27:24,040 - DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 seconds [ 137.972226] cloud-init[932]: 2018-02-23 18:27:24,048 - url_helper.py[WARNING]: Calling 'http://192.168.0.7/latest/meta-data/instance-id' failed [0/120s]: request error [HTTPConnectionPool(host='192.168.0.7', port=80): Max retries exceeded with url: /latest/meta-data/instance-id (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))] [ 138.974223] cloud-init[932]: 2018-02-23 18:27:25,053 - url_helper.py[WARNING]: Calling 'http://192.168.0.7/latest/meta-data/instance-id' failed [1/120s]: request error [HTTPConnectionPool(host='192.168.0.7', port=80): Max retries exceeded with url: /latest/meta-data/instance-id (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))] After give up 169.254.169.254 it tries 192.168.0.7 that is the dhcp address for the project. 
I've checked that neutron-l3-agent is running, without errors. On compute node where VM is running, agents and vswitch is running. I could check the namespace of a problematic project and saw an iptables rules redirecting traffic from 169.254.169.254:80 to 0.0.0.0:9697, and there is a process neutron-ns-medata_proxy_ID that opens that port. So, it look like the metadata-proxy is running fine. But, as we can see in logs there is a timeout. If I restart all services on network node sometimes solves the problem. In some cases I have to restart services on controller node (nova-api). So, all work fine for some time and start to have problems again. Where can I investigate to try finding the cause of the problem? I appreciate any help. Thank you! - JLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradhanparas at gmail.com Tue Feb 27 15:26:49 2018 From: pradhanparas at gmail.com (Paras pradhan) Date: Tue, 27 Feb 2018 09:26:49 -0600 Subject: [Openstack] Instances lost connectivity with metadata service. In-Reply-To: References: Message-ID: If this is project specifc usually I run the router-update and fixes the problem. /usr/bin/neutron router-update --admin-state-up False $routerid /usr/bin/neutron router-update --admin-state-up True $routerid On Mon, Feb 26, 2018 at 5:53 AM, Jorge Luiz Correa wrote: > I would like some help to identify (and correct) a problem with instances > metadata during booting. My environment is a Mitaka instalation, under > Ubuntu 16.04 LTS, with 1 controller, 1 network node and 5 compute nodes. > I'm using classic OVS as network setup. > > The problem ocurs after some period of time in some projects (not all > projects at same time). When booting a Ubuntu Cloud Image with cloud-init, > instances lost conection with API metadata and doesn't get their > information like key-pairs and cloud-init scripts. > > [ 118.924311] cloud-init[932]: 2018-02-23 18:27:05,003 - > url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009- > 04-04/meta-data/instance-id' failed [101/120s]: request error > [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries > exceeded with url: /2009-04-04/meta-data/instance-id (Caused by > ConnectTimeoutError( object at 0x7faabcd6fa58>, 'Connection to 169.254.169.254 timed out. > (connect timeout=50.0)'))] > [ 136.959361] cloud-init[932]: 2018-02-23 18:27:23,038 - > url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009- > 04-04/meta-data/instance-id' failed [119/120s]: request error > [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries > exceeded with url: /2009-04-04/meta-data/instance-id (Caused by > ConnectTimeoutError( object at 0x7faabcd7f240>, 'Connection to 169.254.169.254 timed out. 
> (connect timeout=17.0)'))] > [ 137.967469] cloud-init[932]: 2018-02-23 18:27:24,040 - > DataSourceEc2.py[CRITICAL]: Giving up on md from [' > http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 > seconds > [ 137.972226] cloud-init[932]: 2018-02-23 18:27:24,048 - > url_helper.py[WARNING]: Calling 'http://192.168.0.7/latest/ > meta-data/instance-id' failed [0/120s]: request error > [HTTPConnectionPool(host='192.168.0.7', port=80): Max retries exceeded > with url: /latest/meta-data/instance-id (Caused by > NewConnectionError(' object at 0x7faabcd7fc18>: Failed to establish a new connection: [Errno > 111] Connection refused',))] > [ 138.974223] cloud-init[932]: 2018-02-23 18:27:25,053 - > url_helper.py[WARNING]: Calling 'http://192.168.0.7/latest/ > meta-data/instance-id' failed [1/120s]: request error > [HTTPConnectionPool(host='192.168.0.7', port=80): Max retries exceeded > with url: /latest/meta-data/instance-id (Caused by > NewConnectionError(' object at 0x7faabcd7fa58>: Failed to establish a new connection: [Errno > 111] Connection refused',))] > > After give up 169.254.169.254 it tries 192.168.0.7 that is the dhcp > address for the project. > > I've checked that neutron-l3-agent is running, without errors. On compute > node where VM is running, agents and vswitch is running. I could check the > namespace of a problematic project and saw an iptables rules redirecting > traffic from 169.254.169.254:80 to 0.0.0.0:9697, and there is a process > neutron-ns-medata_proxy_ID that opens that port. So, it look like the > metadata-proxy is running fine. But, as we can see in logs there is a > timeout. > > If I restart all services on network node sometimes solves the problem. In > some cases I have to restart services on controller node (nova-api). So, > all work fine for some time and start to have problems again. > > Where can I investigate to try finding the cause of the problem? > > I appreciate any help. Thank you! > > - JLC > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Wed Feb 28 14:45:19 2018 From: eblock at nde.ag (Eugen Block) Date: Wed, 28 Feb 2018 14:45:19 +0000 Subject: [Openstack] Compute Node not mounting disk to VM's In-Reply-To: Message-ID: <20180228144519.Horde.IQRwtkWf6QBEm4Qm7gvDXSl@webmail.nde.ag> Hi, unfortunately, I don't have an answer for you, but it seems that you're not alone with this. In the past 10 days or so I have read about very similiar issues multiple times (e.g. [1], [2]). In fact, it sounds like the update could be responsible for these changes. Usually, you can change the disk_bus by specifying glance image properties, something like this: openstack image set --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi --property hw_qemu_guest_agent=yes --property os_require_quiesce=yes But I doubt any effect of this, there has to be something else telling libvirt to use scsi instead of virtio. I hope someone else has an idea where to look at since I don't have this issue and can't reproduce it. 
What is your output for ---cut here--- root at compute:~ # grep -A3 virtio-blk /usr/lib/udev/rules.d/60-persistent-storage.rules # virtio-blk KERNEL=="vd*[!0-9]", ATTRS{serial}=="?*", ENV{ID_SERIAL}="$attr{serial}", SYMLINK+="disk/by-id/virtio-$env{ID_SERIAL}" KERNEL=="vd*[0-9]", ATTRS{serial}=="?*", ENV{ID_SERIAL}="$attr{serial}", SYMLINK+="disk/by-id/virtio-$env{ID_SERIAL}-part%n" ---cut here--- You could also take a look into /etc/glance/metadefs/compute-libvirt-image.json, maybe there is something wrong there, but as I said, I can't really reproduce this. Good luck! [1] https://ask.openstack.org/en/question/112488/libvirt-not-allocating-cpu-and-disk-to-vms-after-the-os-update/ [2] https://bugs.launchpad.net/nova/+bug/1560965 Zitat von Yedhu Sastry : > Hello, > > I have an OpenStack cluster(Newton) which is basically a test cluster. > After the regular OS security update and upgrade in all my compute nodes I > have problem with New VMs. While launching new VM's Iam getting the > Error "ALERT! > LABEL=cloudimg-rootfs does not exist Dropping to a shell!" in the console > log of VM's. In horizon it is showing as active. Iam booting from image not > from volume. Before the update everything was fine. > > Then I checked all the logs related to OpenStack and I cant find any info > related to this. I spent days and I found that after the update libvirt is > now using scsi instead of virtio. I dont know why. All the VM's which I > created before the update are running fine and is using 'virtio'. Then I > tried to manually change the instancexx.xml file of the libvirt to use " > " and started the VM again using 'virsh > start instancexx'. VM got started and then went to shutdown state. But in > the console log I can see VM is getting IP and properly booting without any > error and then it goes to poweroff state. > > > 1) Whether this issue is related to the update of libvirt?? If so why > libvirt is not using virtio_blk anymore?? Why it is using only > virtio_scsi?? Is it possible to change libvirt to use virtio_blk instead of > virtio_scsi?? > > 2) I found nova package version on compute nodes are 14.0.10 and on > controller node it is 14.0.1. Whether this is the cause of the problem?? > Whether an update in controller node solve this issue?? Iam not sure about > this. > > 3) Why Task status of instancexx is showing as Powering Off in horizon > after 'virsh start instancexx' in the compute node?? Why it is not starting > the VM with the manually customized .xml file of libvirt?? > > > Any help is really appreciated. > > > -- > > Thank you for your time and have a nice day, > > > With kind regards, > Yedhu Sastri -- Eugen Block voice : +49-40-559 51 75 NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77 Postfach 61 03 15 D-22423 Hamburg e-mail : eblock at nde.ag Vorsitzende des Aufsichtsrates: Angelika Mozdzen Sitz und Registergericht: Hamburg, HRB 90934 Vorstand: Jens-U. Mozdzen USt-IdNr. DE 814 013 983 From srelf at ukcloud.com Wed Feb 28 15:19:31 2018 From: srelf at ukcloud.com (Steven Relf) Date: Wed, 28 Feb 2018 15:19:31 +0000 Subject: [Openstack] Compute Node not mounting disk to VM's In-Reply-To: References: Message-ID: <2DE7E082-60A9-4FA0-965B-D40F24A5F27D@ukcloud.com> Hi With regards to this. 3) Why Task status of instancexx is showing as Powering Off in horizon after 'virsh start instancexx' in the compute node?? Why it is not starting the VM with the manually customized .xml file of libvirt?? 
I think by default if you power on an instance via the virsh command on a hypervisor whilst nova thinks the instance should be shutoff that nova will initiate a shutdown again, to ensure the hypervisor state and the nova state match. I believe it is configurable, but I’m struggling to remember where. Rgds Steve Steven Relf - Technical Authority Cloud Native Infrastructure srelf at ukcloud.com +44 7500 085 864 www.ukcloud.com A8, Cody Technology Park, Ively Road, Farnborough, GU14 0LX Notice: This message contains information that may be privileged or confidential and is the property of UKCloud Ltd. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorised to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message. UKCloud reserves the right to monitor all e-mail communications through its networks. UKCloud Ltd is registered in England and Wales: Company No: 07619797. Registered office: Hartham Park, Hartham, Corsham, Wiltshire SN13 0RP. -------------- next part -------------- An HTML attachment was scrubbed... URL:
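The behaviour Steven describes is nova's power-state sync, and the knobs live in nova.conf on the compute node; whether you actually want to loosen them is another question, and for a quick test it is usually less painful to start the instance through nova (openstack server start instancexx) so the two never disagree in the first place. For reference, a sketch of the relevant options (both exist in Newton-era nova; the values shown are non-defaults):

[DEFAULT]
# periodic task that forces the hypervisor back to the state nova expects;
# the default is 600 seconds, -1 disables it
sync_power_state_interval = -1

[workarounds]
# True (the default) makes nova react to libvirt lifecycle events as well,
# which is what shuts down an instance started behind nova's back
handle_virt_lifecycle_events = False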