[Openstack] Swift broken pipe

Kuo Hugo tonytkdk at gmail.com
Tue Jul 14 15:52:36 UTC 2015


Hi Heiko,

This command already specifies the worker count, so there's no need to start
workers manually with ssbench-worker.

ssbench-master run-scenario -f large.scenario -u 200 -o 4000 --workers 4
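For reference, a full invocation usually also carries the auth options; the
endpoint and credentials below are placeholders, not values from your
cluster:

ssbench-master run-scenario -A http://<proxy>/auth/v1.0 -U <account>:<user> \
    -K <key> -f large.scenario -u 200 -o 4000 --workers 4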


   - Ensure there are no leftover workers: “$ ps aux | grep ssbench”
   - Kill any registered workers: “$ ssbench-master kill-workers” (a combined
   cleanup sequence is sketched below)
   - Retry with a lower user count (-u) and fewer operations (-o), for
   example:

ssbench-master run-scenario -f large.scenario -u 10 -r 30 --workers 1
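Putting the first two checks together, a cleanup sequence you might run
between attempts (a sketch; whether pkill is available on your benchmark
host is an assumption):

$ ps aux | grep [s]sbench        # list any leftover master/worker processes
$ ssbench-master kill-workers    # ask the master to stop registered workers
$ pkill -f ssbench-worker        # assumption: force-stop any stragglers

The reduced scenario below pairs with that command: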

{
  "name": "Small test scenario",
  "sizes": [{
    "name": "tiny",
    "size_min": 100000,
    "size_max": 160000
  }, {
    "name": "small",
    "size_min": 400000,
    "size_max": 4000000
  }],
  "initial_files": {
    "tiny": 100
  },
  "operation_count": 1000,
  "crud_profile": [1, 0, 0, 0],
  "user_count": 5,
  "container_base": "ssbench",
  "container_count": 100,
  "container_concurrency": 100
}
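A note on the profile: crud_profile holds relative weights for
Create/Read/Update/Delete operations, so [1, 0, 0, 0] above makes the run
pure uploads, which exercises exactly the write path that raises your broken
pipe. Your original [4, 3, 2, 2] mixes all four: out of 4+3+2+2 = 11, that
is roughly 36% creates, 27% reads, 18% updates and 18% deletes.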

Hugo

2015-07-14 16:55 GMT+08:00 Heiko Krämer <kraemer at avarteq.de>:

>
> Hi guys,
>
> First of all, sorry for the cross-post, but it seems critical:
> https://ask.openstack.org/en/question/78403/swift-broken-pipe/
>
> I'm running into a very strange issue when using ssbench to test my
> cluster.
>
>   * Swift 2.2.2
>   * 2 Proxy Nodes (64Gig RAM, 10G interfaces, 16 cores)
>   * 3 Storage nodes (12 SATA => Object, SSD => Container/Account, 10G
> interfaces, 8 cores)
>   * L3 Keepalived LB
>   * Ubuntu 14.04
>   * kernel 3.19.x
>
> Memcached is installed on both proxy nodes. The dispersion check works
> fine; ssbench doesn't.
>
> ssbench is located on another server with 1G connectivity!
>
> {
>   "name": "Small test scenario",
>   "sizes": [{
>     "name": "tiny",
>     "size_min": 100000,
>     "size_max": 160000
>   }, {
>     "name": "small",
>     "size_min": 400000,
>     "size_max": 4000000
>   }],
>   "initial_files": {
>     "tiny": 100,
>     "small": 20
>   },
>   "operation_count": 1000,
>   "crud_profile": [4, 3, 2, 2],
>   "user_count": 5,
>   "container_base": "ssbench",
>   "container_count": 100,
>   "container_concurrency": 100
> }
>
> ssbench-master run-scenario -f large.scenario -u 200 -o 4000 --workers 4
>
> Starting the workers:
>
> /usr/local/bin/ssbench-worker -c 200 --zmq-host 10.0.0.4 -c 50
> --batch-size 8 4
>
> Log output on the proxy nodes:
>
> Jul 14 08:35:27 proxy1 swift: ERROR with Object server
> 192.168.100.7:6000/sdf re: Trying to write to
> /v1/AUTH_a12e7b67dca043cba5eb395b6346b0a4/ssbench_000047/small_002615:
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/dist-packages/swift/proxy/controllers/obj.py", line 317, in _send_file
>     conn.send(chunk)
>   File "/usr/lib/python2.7/httplib.py", line 811, in send
>     self.sock.sendall(data)
>   File "/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 376, in sendall
>     tail = self.send(data, flags)
>   File "/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 358, in send
>     total_sent += fd.send(data[total_sent:], flags)
> error: [Errno 32] Broken pipe
>
> Object-Server count on each storage node: 28
>
> I tested the scenario without the load balancer, to rule it out, but the
> errors remained.
>
> I've been trying to solve this for days, without success :(
>
> If you need more information, please let me know.
>
> Cheers
> Heiko
>
> - --
> B.Sc. Computer Science
> Heiko Krämer
> CIO/Administrator
>
> Twitter: @railshoster
> Avarteq GmbH
> Branch office:
> Prinzessinnenstr. 20, 10969 Berlin
>
> - ----
> Managing directors: Alexander Faißt, Dipl.-Inf. (FH) Julian Fischer
> Commercial register: AG Saarbrücken HRB 17413, VAT ID: DE262633168
> Registered office:
> Science Park 2
> 66123 Saarbrücken
>
> Tel: +49 (0)681 / 309 64 190
> Fax: +49 (0)681 / 309 64 191
>
> Visit:
> http://www.enterprise-rails.de/