[openstack-dev] [tripleo][manila] Ganesha deployment

Jan Provaznik jprovazn at redhat.com
Thu Apr 13 11:34:19 UTC 2017


On Tue, Apr 11, 2017 at 9:45 PM, Ben Nemec <openstack at nemebean.com> wrote:
>
>
> On 04/11/2017 02:00 PM, Giulio Fidente wrote:
>>
>> On Tue, 2017-04-11 at 16:50 +0200, Jan Provaznik wrote:
>>>
>>> On Mon, Apr 10, 2017 at 6:55 PM, Ben Nemec <openstack at nemebean.com>
>>> wrote:
>>>>
>>>> On 04/10/2017 03:22 AM, Jan Provaznik wrote:
>>>> Well, on second thought it might be possible to make the Storage
>>>> network only routable within overcloud Neutron by adding a bridge
>>>> mapping for the Storage network and having the admin configure a
>>>> shared Neutron network for it.  That would be somewhat more secure
>>>> since it wouldn't require the Storage network to be routable by the
>>>> world.  I also think this would work today in TripleO with no
>>>> changes.
>>>>
>>>
>>> This sounds interesting. I was searching for more info on how the
>>> bridge mapping should be done in this case and what the specific
>>> setup steps should look like, but the process is still not clear to
>>> me. I would be grateful for more details/guidance on this.
>>
>>
>> I think this will be represented in neutron as a provider network,
>> which has to be created by the overcloud admin after the overcloud
>> deployment is finished.
>>
>> Though based on Kilo, this was one of the best docs I could find, and
>> it includes config examples [1].
>>
>> It assumes that the operator created a bridge mapping for it when
>> deploying the overcloud.
>>
>>>> I think the answer here will be the same as for vanilla Ceph.  You
>>>> need to make the network routable to instances, and you'd have the
>>>> same options as I discussed above.
>>>>
>>>
>>> Yes, it seems that mapping to a provider network would solve the
>>> existing problem both when using ceph directly and when using
>>> ganesha servers in the future (it would just be a matter of which
>>> network the service is exposed to).
>>
>>
>> +1
>>
>> Regarding the composability questions, I think this represents a
>> "composable HA" scenario where we want to manage a remote service
>> with pacemaker using pacemaker-remote.
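>>
>> As a rough sketch (not a tested TripleO workflow; the resource names
>> and address are only illustrative), managing such a remote node with
>> pcs could look like:
>>
>>   # register the ganesha host as a pacemaker-remote node
>>   pcs resource create ganesha-remote ocf:pacemaker:remote \
>>       server=192.168.24.50 reconnect_interval=60
>>   # pin the ganesha service to that remote node
>>   pcs constraint location ganesha-service prefers ganesha-remote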
>>
>> Yet at this stage I think we want to add support for new services by
>> running them in containers first (only?), and pacemaker+containers is
>> still a work in progress, so there aren't easy answers.
>>
>> Containers will have access to the host networks though, so the case
>> for a provider network in the overcloud remains valid.
>>
>> 1. https://docs.openstack.org/kilo/networking-guide/scenario_provider_ovs.html
>>
>
> I think there are three major pieces that would need to be in place to have
> a storage provider network:
>
> 1) The storage network must be bridged in the net-iso templates.  I don't
> think our default net-iso templates do that, but there are examples of
> bridged networks in them:
> https://github.com/openstack/tripleo-heat-templates/blob/master/network/config/multiple-nics/compute.yaml#L121
> For the rest of the steps I will assume the bridge was named br-storage.
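>
> A minimal sketch of what that bridge could look like in an
> os-net-config template (adapted from the example linked above; nic2
> and the use of StorageIpSubnet are illustrative and depend on your
> NIC layout):
>
>             - type: ovs_bridge
>               name: br-storage
>               use_dhcp: false
>               addresses:
>                 - ip_netmask:
>                     get_param: StorageIpSubnet
>               members:
>                 - type: interface
>                   name: nic2
>                   primary: true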
>
> 2) Specify a bridge mapping when deploying the overcloud.  The environment
> file would look something like this (datacentre is the default value, so I'm
> including it too):
>
> parameter_defaults:
>   NeutronBridgeMappings: 'datacentre:br-ex,storage:br-storage'
>
> 3) Create a provider network after deployment as described in the link
> Giulio provided.  The specific command will depend on the network
> architecture, but it would need to include "--provider:physical_network
> storage".
>
> We might need to add the ability to do 3 as part of the deployment,
> depending on what is needed for the Ganesha deployment itself.  We've
> typically avoided creating network resources like this in the
> deployment because of the huge variations in what people want, but
> this might be an exceptional case since the network will be a required
> part of the overcloud.
>
>

Thank you both for your help. Based on the steps suggested above I was
able to mount a ceph volume in a user instance when the overcloud was
deployed with net-iso using net-single-nic-with-vlans (the easiest one
to deploy in my virtual environment). For net-single-nic-with-vlans I
skipped creating an additional bridge (a single bridge is used for all
networks in this case), deployed the overcloud as usual, and then
configured networking:
neutron net-create storage --shared \
    --provider:physical_network datacentre \
    --provider:network_type vlan --provider:segmentation_id 30
neutron subnet-create --name storage-subnet \
    --allocation-pool start=172.16.1.100,end=172.16.1.120 \
    --enable-dhcp storage 172.16.1.0/24

and created a user instance attached to both the tenant and storage
networks:
| f7d4e619-c8f5-4de3-a4c3-4120eea818d1 | Server1 | ACTIVE | - | Running | default-net=192.168.2.107, 192.168.24.100; storage=172.16.1.110 |
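
For reference, booting an instance with both networks attached looks
something like this (flavor, image, and net IDs are placeholders):

nova boot --flavor <flavor> --image <image> \
    --nic net-id=<default-net-id> --nic net-id=<storage-net-id> Server1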

The obstacle I'm hitting, though, is that the second interface (the one
on the storage network) doesn't come up automatically on instance boot:
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
default qlen 1000
    link/ether fa:16:3e:71:90:df brd ff:ff:ff:ff:ff:ff

from the cloud-init log:
ci-info: ++++++++++++++++++++++++++++ Net device info ++++++++++++++++++++++++++++
ci-info: +--------+-------+---------------+---------------+-------+-------------------+
ci-info: | Device |   Up  |    Address    |      Mask     | Scope |     Hw-Address    |
ci-info: +--------+-------+---------------+---------------+-------+-------------------+
ci-info: |  eth1: | False |       .       |       .       |   .   | fa:16:3e:27:2a:bf |
ci-info: |  eth0: |  True | 192.168.2.107 | 255.255.255.0 |   .   | fa:16:3e:ba:00:49 |
ci-info: |  eth0: |  True |       .       |       .       |   d   | fa:16:3e:ba:00:49 |
ci-info: |   lo:  |  True |   127.0.0.1   |   255.0.0.0   |   .   |         .         |
ci-info: |   lo:  |  True |       .       |       .       |   d   |         .         |
ci-info: +--------+-------+---------------+---------------+-------+-------------------+

If I manually set an IP for eth1 (see the commands below), then the
ceph mount works. I discussed this with Giulio and he suspects the
problem is that DHCP on this network conflicts with the DHCP server
running on the undercloud for the storage network. Any ideas?
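
For reference, the manual workaround inside the instance (using the
address neutron assigned to the storage port) is just:

ip link set eth1 up
ip addr add 172.16.1.110/24 dev eth1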

Thanks, Jan


