[Openstack] about block storage

Vinay Venkataraghavan vinayvenkat at gmail.com
Fri Nov 15 19:44:04 UTC 2013


iSCSI is sufficiently performant. I have done extensive testing
comparing iSCSI and NFS for VM storage, and iSCSI is very fast.

Of course, the caveat is how your network is configured. You should
definitely perform all the tests that have been recommended in this
thread to determine where the bottleneck could be.

On another note, I would love to hear your experiences with Ceph. I've
heard that it's great in theory but in practice it's really not that
performant. Any performance numbers would be great.
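
If anyone wants to pull quick numbers off a Ceph cluster, rados bench
is an easy first pass. A minimal sketch, assuming a throwaway test
pool (the pool name below is a placeholder):

    #!/usr/bin/env python
    # Rough Ceph throughput numbers via rados bench.
    import subprocess

    POOL = "testpool"  # placeholder: use a dedicated, disposable pool

    # 10-second write test; --no-cleanup keeps the objects around so
    # the read test below has something to read
    subprocess.check_call(["rados", "bench", "-p", POOL, "10", "write",
                           "--no-cleanup"])
    # sequential read of the objects written above
    subprocess.check_call(["rados", "bench", "-p", POOL, "10", "seq"])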

Thanks,
- V


On Fri, Nov 15, 2013 at 3:37 AM, Antonio Messina <
antonio.s.messina at gmail.com> wrote:

> I would test the effective network bandwidth with iperf.
>
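> A minimal sketch of how I might script that (assuming iperf is
> installed on both ends and a server is already listening on the
> storage node via "iperf -s"; the host name below is a placeholder):
>
>     #!/usr/bin/env python
>     # Run an iperf client against the storage node, print the report.
>     import subprocess
>
>     STORAGE_NODE = "storage01"  # placeholder: your cinder node
>
>     # -c <host> = client mode, -t 30 = measure for 30 seconds
>     print(subprocess.check_output(["iperf", "-c", STORAGE_NODE,
>                                    "-t", "30"]).decode())
>
> On a healthy 1000Mb link you should see around 940 Mbit/s of TCP
> throughput; much less than that points at the network rather than
> the storage.
>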
> Then I would test the speed of the cinder backend by mounting the
> volume locally and running, for instance, iozone.
>
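> Since the cinder backend further down this thread is ceph, that
> could look like the following sketch (pool and volume names are
> placeholders; it assumes the rbd kernel client with its usual udev
> rules plus iozone are installed, and it destroys any data on the
> volume):
>
>     #!/usr/bin/env python
>     # Map an RBD volume locally, put a filesystem on it, run iozone.
>     import subprocess
>
>     POOL, VOLUME = "volumes", "volume-test"  # placeholders
>     DEV = "/dev/rbd/%s/%s" % (POOL, VOLUME)  # symlink made by udev
>
>     subprocess.check_call(["rbd", "map", "%s/%s" % (POOL, VOLUME)])
>     subprocess.check_call(["mkfs.ext4", DEV])  # wipes the volume!
>     subprocess.check_call(["mount", DEV, "/mnt"])
>     # -a = automatic mode over a range of file and record sizes
>     subprocess.check_call(["iozone", "-a"], cwd="/mnt")
>     subprocess.check_call(["umount", "/mnt"])
>     subprocess.check_call(["rbd", "unmap", DEV])
>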
> Finally, I would test the performance of the iSCSI backend by mounting
> the volume on the compute node (or another node) via iSCSI and running
> iozone on top of it.
>
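> A sketch of the iSCSI variant (target IQN and portal are
> placeholders; it assumes open-iscsi is installed on the test node):
>
>     #!/usr/bin/env python
>     # Discover and log in to a cinder iSCSI target.
>     import subprocess
>
>     PORTAL = "storage01:3260"                      # placeholder
>     TARGET = "iqn.2010-10.org.openstack:volume-x"  # placeholder IQN
>
>     subprocess.check_call(["iscsiadm", "-m", "discovery",
>                            "-t", "sendtargets", "-p", PORTAL])
>     subprocess.check_call(["iscsiadm", "-m", "node", "-T", TARGET,
>                            "-p", PORTAL, "--login"])
>     # the new /dev/sdX then gets a filesystem and an iozone run,
>     # exactly as in the local test above
>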
> But I have to admit I don't have much experience with iSCSI :)
>
> .a.
>
> On Fri, Nov 15, 2013 at 3:30 AM, Jitendra Kumar Bhaskar
> <jitendra.b at pramati.com> wrote:
> > Please make sure the links from the block storage node to the switch,
> > and from the switch to the compute nodes, are gigabit. That means
> > every port along the path should negotiate at gigabit speed.
> >
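> > A quick way to verify the negotiated speed on each host (the
> > interface name is a placeholder; ethtool needs root):
> >
> >     #!/usr/bin/env python
> >     # Print the negotiated link speed of an interface via ethtool.
> >     import subprocess
> >
> >     IFACE = "eth0"  # placeholder: the storage-network interface
> >     out = subprocess.check_output(["ethtool", IFACE]).decode()
> >     for line in out.splitlines():
> >         if "Speed:" in line:
> >             print(line.strip())  # expect "Speed: 1000Mb/s"
> >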
> > Regards
> > Jitendra Bhaskar
> >
> > On Fri, Nov 15, 2013 at 6:20 AM, Dnsbed Ops <ops at dnsbed.com> wrote:
> >>
> >> They are 1000Mb links.
> >>
> >>
> >> On 2013-11-15 1:40, Razique Mahroua wrote:
> >>>
> >>> Hi,
> >>>
> >>> What is the network link between the two?
> >>>
> >>> On 13 Nov 2013, at 19:38, Dnsbed Ops wrote:
> >>>
> >>>> Hi,
> >>>>
> >>>> The design is pretty simple.
> >>>> We run nova-compute for VM management: for example, a server with
> >>>> 128GB of memory, a 12-core CPU, and 300GB SAS disks (RAID1) hosts
> >>>> 20 VMs. We run cinder as a separate storage service to provide
> >>>> block storage for the VMs; each VM gets a 100GB block volume.
> >>>>
> >>>> More details:
> >>>>
> >>>> OS: Ubuntu 12.04
> >>>> Hypervisor: KVM
> >>>> Networking: nova-network with FlatDHCP, multi-host
> >>>> glance backend: file
> >>>> cinder backend: ceph
> >>>> live migration: no need
> >>>> object storage: no need
> >>>>
> >>>> Thanks.
> >>>>
> >>>> On 2013-11-14 11:09, Razique Mahroua wrote:
> >>>>>
> >>>>> #2: What's the implementation design?
> >>>
> >>>
> >>
>
>
>
> --
> antonio.s.messina at gmail.com
> antonio.messina at uzh.ch                     +41 (0)44 635 42 22
> GC3: Grid Computing Competence Center      http://www.gc3.uzh.ch/
> University of Zurich
> Winterthurerstrasse 190
> CH-8057 Zurich Switzerland
>
> _______________________________________________
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>