[openstack-dev] [tricircle] playing tricircle with devstack under two-region configuration

Vega Cai luckyvega.g at gmail.com
Mon Feb 29 08:12:56 UTC 2016


Hi Pengfei,

It's expected that there's no port for 100.0.0.1, since the addresses we attach
to the routers are 100.0.0.2 and 100.0.0.3.

Try pinging 10.0.2.1 from vm1 and 10.0.1.1 from vm2 to see whether you get a
reply. There may be security group rules that prevent the VMs from being accessed.
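By default the security group usually blocks ICMP coming from outside the
group, so as a quick test you can open it up in each bottom region. This is
only a sketch: "default" here stands for whatever security group your VMs
actually use, and depending on your setup the rules may need to be added via
the top service instead of directly in the bottom regions:

# allow ICMP into the security group used by the VMs (placeholder: default)
neutron --os-region-name Pod1 security-group-rule-create --direction ingress --protocol icmp --remote-ip-prefix 0.0.0.0/0 default
neutron --os-region-name Pod2 security-group-rule-create --direction ingress --protocol icmp --remote-ip-prefix 0.0.0.0/0 default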

To check whether all the processing is done, please run the following commands:

# get the bottom router IDs
neutron --os-region-name Pod1 router-list
neutron --os-region-name Pod2 router-list

# check if each router is correctly processed
neutron --os-region-name Pod1 router-port-list <router-bottom-id>
neutron --os-region-name Pod2 router-port-list <router-bottom-id>
neutron --os-region-name Pod1 router-show <router-bottom-id>
neutron --os-region-name Pod2 router-show <router-bottom-id>
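In each pod the router should have an interface on its local subnet (10.0.1.1
or 10.0.2.1) plus one on the 100.0.0.0/24 bridge subnet (100.0.0.2 / 100.0.0.3).
If that all looks fine, you can also capture on the compute nodes to see whether
the ICMP echo replies really come back across the bridge network. A rough
sketch; the interface name eth1 is just an example, use the VLAN/bridge
interface of your own nodes:

# on the Pod1 compute node
sudo tcpdump -n -i eth1 icmp and host 10.0.1.3 and host 10.0.2.3
# on the Pod2 compute node
sudo tcpdump -n -i eth1 icmp and host 10.0.1.3 and host 10.0.2.3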

BR
Zhiyuan


On 29 February 2016 at 14:41, 时鹏飞 <shipengfei92 at gmail.com> wrote:

> And I find that ew_bridge has a gateway:
>
> root@node1:/home/stack# neutron subnet-show 34405fae-e796-48ae-a6e7-f7943b84a117
> +-------------------+---------------------------------------------------+
> | Field             | Value                                             |
> +-------------------+---------------------------------------------------+
> | allocation_pools  | {"start": "100.0.0.2", "end": "100.0.0.254"}      |
> | cidr              | 100.0.0.0/24                                      |
> | dns_nameservers   |                                                   |
> | enable_dhcp       | False                                             |
> | gateway_ip        | 100.0.0.1                                         |
> | host_routes       |                                                   |
> | id                | 34405fae-e796-48ae-a6e7-f7943b84a117              |
> | ip_version        | 4                                                 |
> | ipv6_address_mode |                                                   |
> | ipv6_ra_mode      |                                                   |
> | name              | ew_bridge_subnet_f55068b6dcfd4b4d9d357b46eedc6644 |
> | network_id        | 0828b209-71a0-481e-9e2c-5c68d2b18387              |
> | subnetpool_id     | f7360e3b-f74c-474f-9ee5-e43657dd077f              |
> | tenant_id         | f55068b6dcfd4b4d9d357b46eedc6644                  |
> +-------------------+---------------------------------------------------+
>
> but in the port list there is no port for the gateway 100.0.0.1
>
> root@node1:/home/stack# neutron port-list
>
> +--------------------------------------+--------------------------------------+-------------------+----------------------------------------------------------+
> | id                                   | name                                 | mac_address       | fixed_ips                                                |
> +--------------------------------------+--------------------------------------+-------------------+----------------------------------------------------------+
> | 5ff4f7c4-5e47-4cc5-8a17-e011938bd930 | 5ff4f7c4-5e47-4cc5-8a17-e011938bd930 | fa:16:3e:d0:1e:8e | {"subnet_id": "aeb34101-0e0f-402c-94a4-e4d45235d531",    |
> |                                      |                                      |                   | "ip_address": "10.0.2.3"}                                |
> | 4f2fb455-8cc5-43ea-87cd-4d4d845548a0 | 4f2fb455-8cc5-43ea-87cd-4d4d845548a0 | fa:16:3e:ed:d2:49 | {"subnet_id": "aeb34101-0e0f-402c-94a4-e4d45235d531",    |
> |                                      |                                      |                   | "ip_address": "10.0.2.1"}                                |
> | 67e16de6-ad02-4e5f-b035-6525a5a6dcc4 |                                      | fa:16:3e:c6:07:65 | {"subnet_id": "aeb34101-0e0f-402c-94a4-e4d45235d531",    |
> |                                      |                                      |                   | "ip_address": "10.0.2.2"}                                |
> | 15cc5862-443f-44a0-a060-33a66b0686cc | 15cc5862-443f-44a0-a060-33a66b0686cc | fa:16:3e:b9:2c:14 | {"subnet_id": "34405fae-e796-48ae-a6e7-f7943b84a117",    |
> |                                      |                                      |                   | "ip_address": "100.0.0.3"}                               |
> | 5367f76e-6a00-478d-b9b8-54d0b4754da7 | 5367f76e-6a00-478d-b9b8-54d0b4754da7 | fa:16:3e:01:70:36 | {"subnet_id": "34405fae-e796-48ae-a6e7-f7943b84a117",    |
> |                                      |                                      |                   | "ip_address": "100.0.0.2"}                               |
> | 1be92584-a800-474f-a105-877e8a30875f | 1be92584-a800-474f-a105-877e8a30875f | fa:16:3e:5d:0d:59 | {"subnet_id": "6a1cafcd-6bc7-4788-8a38-b26616dd43a4",    |
> |                                      |                                      |                   | "ip_address": "10.0.1.3"}                                |
> | fda27c48-ce7c-4852-b9f6-4af7da2d0b3d | fda27c48-ce7c-4852-b9f6-4af7da2d0b3d | fa:16:3e:26:f3:40 | {"subnet_id": "6a1cafcd-6bc7-4788-8a38-b26616dd43a4",    |
> |                                      |                                      |                   | "ip_address": "10.0.1.1"}                                |
> | 7b51933e-edb4-4210-9df4-e346fe99fded |                                      | fa:16:3e:8f:d7:ad | {"subnet_id": "6a1cafcd-6bc7-4788-8a38-b26616dd43a4",    |
> |                                      |                                      |                   | "ip_address": "10.0.1.2"}                                |
> | 4b5a8392-abfc-4261-b98f-e39e377991a7 |                                      | fa:16:3e:ae:bc:4d | "<neutron.db.models_v2.IPAllocation[object at            |
> |                                      |                                      |                   | 7f9d216c99d0] {port_id=u'4b5a8392-abfc-4261-b98f-        |
> |                                      |                                      |                   | e39e377991a7', ip_address=u'10.50.11.30',                |
> |                                      |                                      |                   | subnet_id=u'b50d19d5-435e-4c46-a8fc-cdfc02fbdd29',       |
> |                                      |                                      |                   | network_id=u'e268445b-398f-40c0-ba03-6ebcc33a64a7'}>"    |
> | aee04b98-4dfb-431a-8b8f-50c1f6a67fc6 |                                      | fa:16:3e:3e:d8:ac | "<neutron.db.models_v2.IPAllocation[object at            |
> |                                      |                                      |                   | 7f9d217eba10] {port_id=u'aee04b98-4dfb-431a-8b8f-        |
> |                                      |                                      |                   | 50c1f6a67fc6', ip_address=u'10.50.11.31',                |
> |                                      |                                      |                   | subnet_id=u'b50d19d5-435e-4c46-a8fc-cdfc02fbdd29',       |
> |                                      |                                      |                   | network_id=u'e268445b-398f-40c0-ba03-6ebcc33a64a7'}>"    |
> +--------------------------------------+--------------------------------------+-------------------+----------------------------------------------------------+
> root@node1:/home/stack#
>
>
> Best Regards
>
> Pengfei Shi (时鹏飞)
> Shanghai Jiao Tong University
> Network & Information Center Room 304
>
> shipengfei92 at gmail.com
>
>
>
>
>
>
> On Feb 29, 2016, at 12:55 PM, 时鹏飞 <shipengfei92 at gmail.com> wrote:
>
> Hi, Joe and Zhiyuan
>
> Sorry to bother you again.
>
> When I finished installing the two cross-pod nodes and tried the ping
> command, I got no reply.
>
> In vm1 (ip 10.0.1.3):
>     ping 10.0.1.1 (gateway) is OK;
>     ping 10.0.2.3 (vm2) gets no reply.
>
> In vm2 (ip 10.0.2.3):
>     ping 10.0.2.1 (gateway) is OK;
>     ping 10.0.1.3 (vm1) gets no reply.
>
>
>
> And I used Wireshark to capture the traffic on the internal VLAN network:
>
> <PastedGraphic-1.tiff>
>
> <PastedGraphic-2.tiff>
>
>
> I find that the ping does get a reply on the wire, but it cannot be recognised.
>
> Is the problem caused by ew_bridge_net?
>
>
>
> Some info is below:
>
>
> stack@node1:~$ neutron net-list
>
> +--------------------------------------+------------------------------------------------+-----------------------------------------------------+
> | id                                   | name                                           | subnets                                             |
> +--------------------------------------+------------------------------------------------+-----------------------------------------------------+
> | e268445b-398f-40c0-ba03-6ebcc33a64a7 | ext-net                                        | b50d19d5-435e-4c46-a8fc-cdfc02fbdd29 10.50.11.0/26  |
> | 0828b209-71a0-481e-9e2c-5c68d2b18387 | ew_bridge_net_f55068b6dcfd4b4d9d357b46eedc6644 | 34405fae-e796-48ae-a6e7-f7943b84a117 100.0.0.0/24   |
> | 38066909-c315-41d0-a83e-11aebab1c51d | net2                                           | aeb34101-0e0f-402c-94a4-e4d45235d531 10.0.2.0/24    |
> | acb5039a-c076-49ff-9db6-3879630dc531 | net1                                           | 6a1cafcd-6bc7-4788-8a38-b26616dd43a4 10.0.1.0/24    |
> | afd0ea21-8e0e-4951-97bb-296c18e2ed6a | ns_bridge_net_f55068b6dcfd4b4d9d357b46eedc6644 | a241e73f-8ae6-4a9f-9e03-445f70a11629 100.128.0.0/24 |
> +--------------------------------------+------------------------------------------------+-----------------------------------------------------+
>
>
> stack@node1:~$ neutron port-list
>
> +--------------------------------------+--------------------------------------+-------------------+----------------------------------------------------------+
> | id                                   | name                                 | mac_address       | fixed_ips                                                |
> +--------------------------------------+--------------------------------------+-------------------+----------------------------------------------------------+
> | 5ff4f7c4-5e47-4cc5-8a17-e011938bd930 | 5ff4f7c4-5e47-4cc5-8a17-e011938bd930 | fa:16:3e:d0:1e:8e | {"subnet_id": "aeb34101-0e0f-402c-94a4-e4d45235d531",    |
> |                                      |                                      |                   | "ip_address": "10.0.2.3"}                                |
> | 4f2fb455-8cc5-43ea-87cd-4d4d845548a0 | 4f2fb455-8cc5-43ea-87cd-4d4d845548a0 | fa:16:3e:ed:d2:49 | {"subnet_id": "aeb34101-0e0f-402c-94a4-e4d45235d531",    |
> |                                      |                                      |                   | "ip_address": "10.0.2.1"}                                |
> | 67e16de6-ad02-4e5f-b035-6525a5a6dcc4 |                                      | fa:16:3e:c6:07:65 | {"subnet_id": "aeb34101-0e0f-402c-94a4-e4d45235d531",    |
> |                                      |                                      |                   | "ip_address": "10.0.2.2"}                                |
> | 15cc5862-443f-44a0-a060-33a66b0686cc | 15cc5862-443f-44a0-a060-33a66b0686cc | fa:16:3e:b9:2c:14 | {"subnet_id": "34405fae-e796-48ae-a6e7-f7943b84a117",    |
> |                                      |                                      |                   | "ip_address": "100.0.0.3"}                               |
> | 5367f76e-6a00-478d-b9b8-54d0b4754da7 | 5367f76e-6a00-478d-b9b8-54d0b4754da7 | fa:16:3e:01:70:36 | {"subnet_id": "34405fae-e796-48ae-a6e7-f7943b84a117",    |
> |                                      |                                      |                   | "ip_address": "100.0.0.2"}                               |
> | 1be92584-a800-474f-a105-877e8a30875f | 1be92584-a800-474f-a105-877e8a30875f | fa:16:3e:5d:0d:59 | {"subnet_id": "6a1cafcd-6bc7-4788-8a38-b26616dd43a4",    |
> |                                      |                                      |                   | "ip_address": "10.0.1.3"}                                |
> | fda27c48-ce7c-4852-b9f6-4af7da2d0b3d | fda27c48-ce7c-4852-b9f6-4af7da2d0b3d | fa:16:3e:26:f3:40 | {"subnet_id": "6a1cafcd-6bc7-4788-8a38-b26616dd43a4",    |
> |                                      |                                      |                   | "ip_address": "10.0.1.1"}                                |
> | 7b51933e-edb4-4210-9df4-e346fe99fded |                                      | fa:16:3e:8f:d7:ad | {"subnet_id": "6a1cafcd-6bc7-4788-8a38-b26616dd43a4",    |
> |                                      |                                      |                   | "ip_address": "10.0.1.2"}                                |
> | 4b5a8392-abfc-4261-b98f-e39e377991a7 |                                      | fa:16:3e:ae:bc:4d | "<neutron.db.models_v2.IPAllocation[object at            |
> |                                      |                                      |                   | 7f9d252aad90] {port_id=u'4b5a8392-abfc-4261-b98f-        |
> |                                      |                                      |                   | e39e377991a7', ip_address=u'10.50.11.30',                |
> |                                      |                                      |                   | subnet_id=u'b50d19d5-435e-4c46-a8fc-cdfc02fbdd29',       |
> |                                      |                                      |                   | network_id=u'e268445b-398f-40c0-ba03-6ebcc33a64a7'}>"    |
> | aee04b98-4dfb-431a-8b8f-50c1f6a67fc6 |                                      | fa:16:3e:3e:d8:ac | "<neutron.db.models_v2.IPAllocation[object at            |
> |                                      |                                      |                   | 7f9d2174a710] {port_id=u'aee04b98-4dfb-431a-8b8f-        |
> |                                      |                                      |                   | 50c1f6a67fc6', ip_address=u'10.50.11.31',                |
> |                                      |                                      |                   | subnet_id=u'b50d19d5-435e-4c46-a8fc-cdfc02fbdd29',       |
> |                                      |                                      |                   | network_id=u'e268445b-398f-40c0-ba03-6ebcc33a64a7'}>"    |
> +--------------------------------------+--------------------------------------+-------------------+----------------------------------------------------------+
>
>
>
>
> Best Regards
>
> Pengfei Shi (时鹏飞)
> Shanghai Jiao Tong University
> Network & Information Center Room 304
>
> shipengfei92 at gmail.com
>
>
>
>
>
>
> On Feb 29, 2016, at 8:36 AM, joehuang <joehuang at huawei.com> wrote:
>
> Hi, Pengfei,
>
> Volume type is not supported in Tricircle yet.
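> To see what is actually failing there, it may also help to check whether
> anything is serving the Cinder endpoint that the volume-type call tried to
> reach (the address and port are taken from the log below; these are just
> generic connectivity checks):
>
> # is anything listening on the Tricircle Cinder API gateway port on that host?
> sudo netstat -tlnp | grep 19997
> # can the endpoint be reached from Node2?
> curl -i http://10.50.11.5:19997/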
>
> Best Regards
> Chaoyi Huang ( Joe Huang )
>
> From: 时鹏飞 [mailto:shipengfei92 at gmail.com]
> Sent: Sunday, February 28, 2016 9:36 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [tricircle] playing tricircle with devstack
> under two-region configuration
>
> Hi, Joe and Zhiyuan,
>
> When I installed devstack on Node2, it failed with errors in
> create_volume_types:
>
> 2016-02-27 17:05:00.920 | + ./devstack/stack.sh:main:L1267:
> create_volume_types
> 2016-02-27 17:05:00.920 | +
> /home/stack/devstack/lib/cinder:create_volume_types:L549:
> is_service_enabled c-api
> 2016-02-27 17:05:00.927 | +
> /home/stack/devstack/functions-common:is_service_enabled:L2026:   return 0
> 2016-02-27 17:05:00.927 | +
> /home/stack/devstack/lib/cinder:create_volume_types:L549:   [[ -n
> lvm:lvmdriver-1 ]]
> 2016-02-27 17:05:00.927 | +
> /home/stack/devstack/lib/cinder:create_volume_types:L550:   local be be_name
> 2016-02-27 17:05:00.928 | +
> /home/stack/devstack/lib/cinder:create_volume_types:L551:   for be in
> '${CINDER_ENABLED_BACKENDS//,/ }'
> 2016-02-27 17:05:00.928 | +
> /home/stack/devstack/lib/cinder:create_volume_types:L552:
> be_name=lvmdriver-1
> 2016-02-27 17:05:00.928 | +
> /home/stack/devstack/lib/cinder:create_volume_types:L553:   openstack
> volume type create --property volume_backend_name=lvmdriver-1 lvmdriver-1
> 2016-02-27 17:05:02.937 | Unable to establish connection to
> http://10.50.11.5:19997/v2/00d182cbe7664762809f4f5a0866635d/types
> 2016-02-27 17:05:03.025 | +
> /home/stack/devstack/lib/cinder:create_volume_types:L1:   exit_trap
> 2016-02-27 17:05:03.025 | + ./devstack/stack.sh:exit_trap:L474:   local r=1
> 2016-02-27 17:05:03.027 | ++ ./devstack/stack.sh:exit_trap:L475:   jobs -p
> 2016-02-27 17:05:03.028 | + ./devstack/stack.sh:exit_trap:L475:   jobs=
> 2016-02-27 17:05:03.028 | + ./devstack/stack.sh:exit_trap:L478:   [[ -n ''
> ]]
> 2016-02-27 17:05:03.028 | + ./devstack/stack.sh:exit_trap:L484:
> kill_spinner
> 2016-02-27 17:05:03.028 | + ./devstack/stack.sh:kill_spinner:L370:   '['
> '!' -z '' ']'
> 2016-02-27 17:05:03.029 | + ./devstack/stack.sh:exit_trap:L486:   [[ 1 -ne
> 0 ]]
> 2016-02-27 17:05:03.029 | + ./devstack/stack.sh:exit_trap:L487:   echo
> 'Error on exit'
> 2016-02-27 17:05:03.029 | Error on exit
> 2016-02-27 17:05:03.029 | + ./devstack/stack.sh:exit_trap:L488:
> generate-subunit 1456592181 522 fail
> 2016-02-27 17:05:03.375 | + ./devstack/stack.sh:exit_trap:L489:   [[ -z
> /opt/stack/logs ]]
> 2016-02-27 17:05:03.375 | + ./devstack/stack.sh:exit_trap:L492:
> /home/stack/devstack/tools/worlddump.py -d /opt/stack/logs
> stack@node2:~$ 2016-02-27 17:05:03.843 | +
> ./devstack/stack.sh:exit_trap:L498:   exit 1
>
>
>
> Best Regards
> Pengfei Shi (时鹏飞)
> Shanghai Jiao Tong University
> Network & Information Center Room 304
>
>
> shipengfei92 at gmail.com
>
>
>
>
>
>
> On Feb 24, 2016, at 9:29 AM, Yipei Niu <newypei at gmail.com> wrote:
>
> Hi Joe and Zhiyuan,
>
> My VM has recovered. When I re-installed devstack on node1, I encountered the
> following errors.
>
> The info in stack.sh.log is as follows:
>
> 2016-02-23 11:18:27.238 | Error: Service n-sch is not running
> 2016-02-23 11:18:27.238 | +
> /home/stack/devstack/functions-common:service_check:L1625:   '[' -n
> /opt/stack/status/stack/n-sch.failure ']'
> 2016-02-23 11:18:27.238 | +
> /home/stack/devstack/functions-common:service_check:L1626:   die 1626 'More
> details about the above errors can be found with screen, with
> ./rejoin-stack.sh'
> 2016-02-23 11:18:27.238 | +
> /home/stack/devstack/functions-common:die:L186:   local exitcode=0
> 2016-02-23 11:18:27.239 | [Call Trace]
> 2016-02-23 11:18:27.239 | ./stack.sh:1354:service_check
> 2016-02-23 11:18:27.239 | /home/stack/devstack/functions-common:1626:die
> 2016-02-23 11:18:27.261 | [ERROR]
> /home/stack/devstack/functions-common:1626 More details about the above
> errors can be found with screen, with ./rejoin-stack.sh
> 2016-02-23 11:18:28.271 | Error on exit
> 2016-02-23 11:18:28.953 | df: '/run/user/112/gvfs': Permission denied
>
> The info in n-sch.log is as follows:
>
> stack@nyp-VirtualBox:~/devstack$ /usr/local/bin/nova-scheduler
> --config-file /etc/nova/nova.conf & echo $!
> >/opt/stack/status/stack/n-sch.pid; fg || echo "n-sch failed to start" |
> tee "/opt/stack/status/stack/n-sch.failure"
> [1] 29467
> /usr/local/bin/nova-scheduler --config-file /etc/nova/nova.conf
> 2016-02-23 19:13:00.050 DEBUG oslo_db.api [-] Loading backend 'sqlalchemy'
> from 'nova.db.sqlalchemy.api' from (pid=29467) _load_backend
> /usr/local/lib/python2.7/dist-packages/oslo_db/api.py:238
> 2016-02-23 19:13:00.300 WARNING oslo_reports.guru_meditation_report [-] Guru
> mediation now registers SIGUSR1 and SIGUSR2 by default for backward
> compatibility. SIGUSR1 will no longer be registered in a future release, so
> please use SIGUSR2 to generate reports.
> 2016-02-23 19:13:00.304 CRITICAL nova [-] ValueError: Empty module name
> 2016-02-23 19:13:00.304 TRACE nova Traceback (most recent call last):
> 2016-02-23 19:13:00.304 TRACE nova   File "/usr/local/bin/nova-scheduler", line 10, in <module>
> 2016-02-23 19:13:00.304 TRACE nova     sys.exit(main())
> 2016-02-23 19:13:00.304 TRACE nova   File "/opt/stack/nova/nova/cmd/scheduler.py", line 43, in main
> 2016-02-23 19:13:00.304 TRACE nova     topic=CONF.scheduler_topic)
> 2016-02-23 19:13:00.304 TRACE nova   File "/opt/stack/nova/nova/service.py", line 281, in create
> 2016-02-23 19:13:00.304 TRACE nova     db_allowed=db_allowed)
> 2016-02-23 19:13:00.304 TRACE nova   File "/opt/stack/nova/nova/service.py", line 167, in __init__
> 2016-02-23 19:13:00.304 TRACE nova     self.manager = manager_class(host=self.host, *args, **kwargs)
> 2016-02-23 19:13:00.304 TRACE nova   File "/opt/stack/nova/nova/scheduler/manager.py", line 49, in __init__
> 2016-02-23 19:13:00.304 TRACE nova     self.driver = importutils.import_object(scheduler_driver)
> 2016-02-23 19:13:00.304 TRACE nova   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 44, in import_object
> 2016-02-23 19:13:00.304 TRACE nova     return import_class(import_str)(*args, **kwargs)
> 2016-02-23 19:13:00.304 TRACE nova   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 30, in import_class
> 2016-02-23 19:13:00.304 TRACE nova     __import__(mod_str)
> 2016-02-23 19:13:00.304 TRACE nova ValueError: Empty module name
> 2016-02-23 19:13:00.304 TRACE nova
> n-sch failed to start
>
>
> Best regards,
> Yipei
>
> On Tue, Feb 23, 2016 at 10:23 AM, Yipei Niu <newypei at gmail.com> wrote:
> Hi Joe,
>
> I have checked. The Neutron API has not started; no process is listening on
> port 9696.
>
> Best regards,
> Yipei
>
>