Baremetal attach volume in Multi-tenancy
Hi,
I am looking for a mechanism that can be used to attach volumes to baremetal instances in a multi-tenant scenario. In addition, we use Ceph as the backend storage for Cinder.
Can anybody give me some advice?
To attach to a baremetal instance, you will need to install the cinderclient along with the python-brick-cinderclient-extension inside the instance itself.

On Wed, May 8, 2019 at 11:15 AM zack chen <zackchen517@gmail.com> wrote:
> Hi, I am looking for a mechanism that can be used to attach volumes to baremetal instances in a multi-tenant scenario. In addition, we use Ceph as the backend storage for Cinder.
> Can anybody give me some advice?
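For illustration, here is a minimal sketch of what that combination does, written against my understanding of the python-cinderclient and os-brick APIs rather than the extension's actual code: it asks Cinder for connection info and hands it to os-brick on the local host. The auth values, IP address, and volume ID are placeholders, and the do_local_attach flag reflects my understanding of the os-brick RBD connector.

    # Sketch: roughly what the brick cinderclient extension does for a
    # local attach (all credentials/IDs below are placeholders).
    from cinderclient import client as cinder_client
    from keystoneauth1 import loading
    from keystoneauth1 import session
    from os_brick.initiator import connector

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',    # placeholder
        username='admin', password='secret',     # placeholders
        project_name='admin',
        user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)
    cinder = cinder_client.Client('3', session=sess)

    # Describe this baremetal host so Cinder can build connection
    # info for it.
    props = connector.get_connector_properties(
        root_helper='sudo', my_ip='192.0.2.10',  # placeholder IP
        multipath=False, enforce_multipath=False)

    volume = cinder.volumes.get('VOLUME_UUID')   # placeholder ID
    conn = cinder.volumes.initialize_connection(volume, props)

    # For RBD the connector needs this flag to map the image with the
    # kernel module instead of returning an image handle (as far as I
    # understand the os-brick RBD connector).
    conn['data']['do_local_attach'] = True

    # os-brick does the protocol-specific work (iSCSI login, rbd map,
    # ...) and returns the local device path.
    brick = connector.InitiatorConnector.factory(
        conn['driver_volume_type'], 'sudo')
    device = brick.connect_volume(conn['data'])
    print(device['path'])

As far as I recall, the extension wraps this same flow behind `cinder local-attach <volume-id>` and `cinder local-detach` commands.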
Thanks! Yes, I have seen this approach. However, the baremetal instance must then be able to reach the OpenStack API network and the storage network if I use the iSCSI or RBD driver as the Cinder volume driver. This may pose security risks in a multi-tenant scenario. How can I ensure that the storage networks of different tenants are isolated from each other, yet still able to communicate with the platform's storage network?

On Wed, May 8, 2019 at 11:28 PM, Walter Boring <waboring@hemna.com> wrote:
> To attach to a baremetal instance, you will need to install the cinderclient along with the python-brick-cinderclient-extension inside the instance itself.
On 08/05, zack chen wrote:
> Hi, I am looking for a mechanism that can be used to attach volumes to baremetal instances in a multi-tenant scenario. In addition, we use Ceph as the backend storage for Cinder.
> Can anybody give me some advice?
Hi,

Is this a standalone Cinder deployment or a normal Cinder in OpenStack deployment?

What storage backend will you be using?

What storage protocol? iSCSI, FC, RBD...?

Depending on these you can go with Walter's suggestion of using cinderclient and its extension (which in general is the best way to go), or you may prefer writing a small Python script that uses OS-Brick and makes the REST API calls directly.

Cheers,
Gorka.
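A minimal sketch of that second option, assuming a Keystone token has already been obtained; the endpoint, project ID, volume ID and IP address are placeholders:

    # Sketch: call the Cinder REST API directly and let os-brick do the
    # host-side attach (endpoint/token/IDs are placeholders).
    import requests
    from os_brick.initiator import connector

    CINDER = 'http://controller:8776/v3/PROJECT_ID'   # placeholder
    HEADERS = {'X-Auth-Token': 'KEYSTONE_TOKEN',      # placeholder
               'Content-Type': 'application/json'}

    props = connector.get_connector_properties(
        'sudo', '192.0.2.10', multipath=False, enforce_multipath=False)

    # Ask Cinder to export the volume for this host;
    # os-initialize_connection is the volume action the client uses too.
    resp = requests.post(
        CINDER + '/volumes/VOLUME_UUID/action',       # placeholder ID
        headers=HEADERS,
        json={'os-initialize_connection': {'connector': props}})
    resp.raise_for_status()
    conn = resp.json()['connection_info']

    # Attach on the host side and print the resulting device path.
    brick = connector.InitiatorConnector.factory(
        conn['driver_volume_type'], 'sudo')
    print(brick.connect_volume(conn['data'])['path'])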
This is a normal Cinder in OpenStack deployment, and I'm using Ceph as the Cinder backend with the RBD driver. My idea is that the instance should reach the OpenStack platform's storage network via the vRouter provided by Neutron, and the vRouter gateway should be able to communicate with the OpenStack platform. Is that right?

On Thu, May 9, 2019 at 5:28 PM, Gorka Eguileor <geguileo@redhat.com> wrote:
> Hi,
> Is this a standalone Cinder deployment or a normal Cinder in OpenStack deployment?
> What storage backend will you be using?
> What storage protocol? iSCSI, FC, RBD...?
> Depending on these you can go with Walter's suggestion of using cinderclient and its extension (which in general is the best way to go), or you may prefer writing a small Python script that uses OS-Brick and makes the REST API calls directly.
> Cheers, Gorka.
On 10/05, zack chen wrote:
> This is a normal Cinder in OpenStack deployment.
> I'm using Ceph as the Cinder backend, with the RBD driver.
Hi,

If you are using a Ceph/RBD cluster then there are some things to take into consideration:

- You need to have the ceph-common package installed in the system.
- The images are mounted using the kernel module, so you have to be careful with the features that are enabled in the images (see the sketch after this message).
- If I'm not mistaken, the RBD attach using the cinderclient extension will fail if you don't have the configuration and credentials files already in the system.
> My idea is that the instance should reach the OpenStack platform's storage network via the vRouter provided by Neutron, and the vRouter gateway should be able to communicate with the OpenStack platform. Is that right?
I can't help you on the network side, since I don't know anything about Neutron.

Cheers,
Gorka.
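To illustrate the kernel-module point above, here is a small sketch that lists the features enabled on a volume's RBD image, using the rados and rbd Python bindings (usually packaged separately from ceph-common, e.g. as python-rados and python-rbd). The pool name 'volumes' and the image name are placeholders. Kernel clients, especially older ones, typically support only a subset of features (layering being the safe baseline), and mapping fails if unsupported features such as object-map, fast-diff, or deep-flatten are enabled.

    # Sketch: check which features are enabled on a Cinder volume's RBD
    # image (pool and image names are placeholders).
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('volumes')            # Cinder's RBD pool
    image = rbd.Image(ioctx, 'volume-VOLUME_UUID')   # placeholder name

    # features() returns a bitmask; compare it against the krbd
    # feature set supported by your kernel before attaching.
    enabled = image.features()
    for name, bit in (('layering', rbd.RBD_FEATURE_LAYERING),
                      ('exclusive-lock', rbd.RBD_FEATURE_EXCLUSIVE_LOCK),
                      ('object-map', rbd.RBD_FEATURE_OBJECT_MAP),
                      ('fast-diff', rbd.RBD_FEATURE_FAST_DIFF),
                      ('deep-flatten', rbd.RBD_FEATURE_DEEP_FLATTEN)):
        print('%s: %s' % (name, bool(enabled & bit)))

    image.close()
    ioctx.close()
    cluster.shutdown()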
Hi,
Thanks for your reply. I saw that Ceph already has an iSCSI gateway. Does the Cinder project have such a driver?

On Fri, May 10, 2019 at 6:39 PM, Gorka Eguileor <geguileo@redhat.com> wrote:
> Hi,
> If you are using a Ceph/RBD cluster then there are some things to take into consideration:
> - You need to have the ceph-common package installed in the system.
> - The images are mounted using the kernel module, so you have to be careful with the features that are enabled in the images.
> - If I'm not mistaken, the RBD attach using the cinderclient extension will fail if you don't have the configuration and credentials files already in the system.
> I can't help you on the network side, since I don't know anything about Neutron.
> Cheers, Gorka.
On 13/05, zack chen wrote:
> Hi,
> Thanks for your reply. I saw that Ceph already has an iSCSI gateway. Does the Cinder project have such a driver?
Hi,

There is an ongoing effort to write a new RBD driver specific for iSCSI, but it is not available yet.

Cheers,
Gorka.
Dear Gorka,

Could you give me the patch link for this work?

Thank you.

On Mon, May 13, 2019 at 5:39 PM Gorka Eguileor <geguileo@redhat.com> wrote:
> Hi,
> There is an ongoing effort to write a new RBD driver specific for iSCSI, but it is not available yet.
> Cheers, Gorka.
--
Sa Pham Dang
Master Student - Soongsil University
Kakaotalk: sapd95
Skype: great_bn
On 13/05, Sa Pham wrote:
> Dear Gorka,
> Could you give me the patch link for this work?
> Thank you.
Hi,

You can see an update on the subject on the PTG's etherpad [1], from line 119 to line 139. There's a video [2] of a previous discussion topic and this one.

Cheers,
Gorka.

[1]: https://etherpad.openstack.org/p/cinder-train-ptg-planning
[2]: https://www.youtube.com/watch?v=N6D6ib7T9Io&feature=em-lbcastemail
Thanks, I'll check it out.

On Mon, May 13, 2019 at 8:14 PM Gorka Eguileor <geguileo@redhat.com> wrote:
> Hi,
> You can see an update on the subject on the PTG's etherpad [1], from line 119 to line 139. There's a video [2] of a previous discussion topic and this one.
> Cheers, Gorka.
> [1]: https://etherpad.openstack.org/p/cinder-train-ptg-planning
> [2]: https://www.youtube.com/watch?v=N6D6ib7T9Io&feature=em-lbcastemail
--
Sa Pham Dang
Master Student - Soongsil University
Kakaotalk: sapd95
Skype: great_bn
participants (4)
- Gorka Eguileor
- Sa Pham
- Walter Boring
- zack chen