[Openstack] Looking for some help with swift

Mayur Patil ram.nath241089 at gmail.com
Tue May 20 06:26:09 UTC 2014


I don't know exactly what you are missing, but

*have you created the files test.txt and test2.txt?*

A 404 is usually caused by something like this.

I think this could be the reason; maybe I am wrong.
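
For example, a quick sanity check (a hypothetical session):

    $ echo "hello swift" > test.txt
    $ swift upload myfiles test.txt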

Thanks!

-- 

*Cheers,*
Mayur S. Patil,
Seeking a S/W Engg. position,
Pune.


On 20 May 2014 11:02, Kuo Hugo <tonytkdk at gmail.com> wrote:

> Hi Clint,
>
> No, it's not necessary to separate a cluster into several zones in
> general. The region and zone information is used by the ring-builder to
> assign partitions as uniquely as possible.
> If some of your nodes are placed in a remote datacenter with higher
> network latency, those nodes should be in a different region.
> If some of your nodes are in a different room but in the same building
> with low network latency, you can place those nodes in a different zone.
>
> So basically, for all nodes in the same rack, room, and DC, you can
> simply use a single zone without any problem.
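>
> For example, a sketch with hypothetical IPs, the stock object port, and
> the r<region>z<zone> device syntax:
>
>     # two object devices in the same region but in different zones
>     swift-ring-builder object.builder add r1z1-192.168.1.10:6000/sdb1 100
>     swift-ring-builder object.builder add r1z2-192.168.1.11:6000/sdb1 100
>     # a device in a remote, higher-latency datacenter gets its own region
>     swift-ring-builder object.builder add r2z1-10.50.0.10:6000/sdb1 100
>     swift-ring-builder object.builder rebalance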
>
> I noticed the replica count is set to 1. Is this a production
> environment? In the current Swift implementation, the replica number
> cannot be changed dynamically (storage policies will make this more
> flexible). You may want at least 3 replicas in a production cluster.
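>
> The replica count is fixed when the ring is created, for example (a
> sketch: 2^10 partitions, 3 replicas, 1 hour min_part_hours):
>
>     swift-ring-builder object.builder create 10 3 1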
>
> As for the part power, a higher value is fine; the impact is on
> performance and memory consumption. If your cluster will expand to more
> nodes with over 500 TB of capacity in the future, you definitely don't
> want a part power of 8. The current value doesn't hurt, so don't worry
> about it for a testing cluster.
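>
> A common rule of thumb from the Swift deployment guide is to aim for
> roughly 100 partitions per drive at the cluster's maximum expected
> size, i.e. part_power = ceil(log2(max_drives * 100)). A quick sketch,
> assuming a cluster that might grow to ~10 drives:
>
>     python -c 'import math; print(int(math.ceil(math.log(10 * 100, 2))))'
>     # -> 10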
>
>
> Cheers // Hugo
>
>
>
> 2014-05-20 4:48 GMT+08:00 Clint Dilks <clintd at waikato.ac.nz>:
>
> Thanks Hugo,
>>
>> That did help. I have a combined proxy and storage node up and running
>> with the following rings; now to scale things out to more storage
>> nodes. From the documentation it's not clear to me, but I believe that
>> as I add storage nodes I should create each one in a separate zone?
>>
>> Based on a Swift partition power calculator I found, I believe I should
>> recreate things using a part power of 8:
>> http://rackerlabs.github.io/swift-ppc/
>>
>> [root at comet swift]# swift-ring-builder account.builder
>> account.builder, build version 1
>> 262144 partitions, 1.000000 replicas, 1 regions, 1 zones, 1 devices, 0.00 balance
>> The minimum number of hours before a partition can be reassigned is 1
>> Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
>>              0       1     1    130.217.78.2  6002    130.217.78.2              6002      sda4 100.00     262144    0.00
>>
>> [root at comet swift]# swift-ring-builder container.builder
>> container.builder, build version 1
>> 262144 partitions, 1.000000 replicas, 1 regions, 1 zones, 1 devices, 0.00 balance
>> The minimum number of hours before a partition can be reassigned is 1
>> Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
>>              0       1     1    130.217.78.2  6001    130.217.78.2              6001      sda4 100.00     262144    0.00
>>
>> [root at comet swift]# swift-ring-builder object.builder
>> object.builder, build version 1
>> 262144 partitions, 1.000000 replicas, 1 regions, 1 zones, 1 devices, 0.00 balance
>> The minimum number of hours before a partition can be reassigned is 1
>> Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
>>              0       1     1    130.217.78.2  6000    130.217.78.2              6000      sda4 100.00     262144    0.00
>>
>>
>>
>>
>> On Mon, May 19, 2014 at 4:36 PM, Kuo Hugo <tonytkdk at gmail.com> wrote:
>>
>>> Hi Clint,
>>>
>>> Two problems:
>>>
>>> 1. Those rings are incorrect. Your rings indicate that all workers are
>>> listening on the same port, 6002. That's why the container PUT request
>>> was handled by the account-server in your log.
>>> 2. You need at least 3 devices to test with 3 replicas. (See the
>>> sketch after the ring output below.)
>>>
>>> [root at comet swift]# swift-ring-builder account.builder
>>> account.builder, build version 1
>>> 262144 partitions, 3.000000 replicas, 1 regions, 1 zones, 1 devices, 0.00 balance
>>> The minimum number of hours before a partition can be reassigned is 1
>>> Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
>>>              0       1     1    130.217.78.2  6002    130.217.78.2              6005      sda4 100.00     786432    0.00
>>>
>>> [root at comet swift]#  swift-ring-builder container.builder
>>> container.builder, build version 1
>>> 262144 partitions, 3.000000 replicas, 1 regions, 1 zones, 1 devices, 0.00 balance
>>> The minimum number of hours before a partition can be reassigned is 1
>>> Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
>>>              0       1     1    130.217.78.2  6002    130.217.78.2              6005      sda4 100.00     786432    0.00
>>>
>>> [root at comet swift]# swift-ring-builder object.builder
>>> object.builder, build version 1
>>> 262144 partitions, 3.000000 replicas, 1 regions, 1 zones, 1 devices, 0.00 balance
>>> The minimum number of hours before a partition can be reassigned is 1
>>> Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
>>>              0       1     1    130.217.78.2  6002    130.217.78.2              6005      sda4 100.00     786432    0.00
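>>>
>>> A possible way to rebuild them (a sketch, assuming the stock ports
>>> 6002/6001/6000 for account/container/object and a single replica on
>>> one device until more devices are added):
>>>
>>>     # after removing the old *.builder and *.ring.gz files:
>>>     swift-ring-builder account.builder create 10 1 1
>>>     swift-ring-builder account.builder add r1z1-130.217.78.2:6002/sda4 100
>>>     swift-ring-builder account.builder rebalance
>>>     swift-ring-builder container.builder create 10 1 1
>>>     swift-ring-builder container.builder add r1z1-130.217.78.2:6001/sda4 100
>>>     swift-ring-builder container.builder rebalance
>>>     swift-ring-builder object.builder create 10 1 1
>>>     swift-ring-builder object.builder add r1z1-130.217.78.2:6000/sda4 100
>>>     swift-ring-builder object.builder rebalance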
>>>
>>>
>>> Hope it helps.
>>>
>>> Hugo Kuo
>>>
>>>
>>>
>>>
>>>
>>> 2014-05-19 0:31 GMT-04:00 Clint Dilks <clintd at waikato.ac.nz>:
>>>
>>> Hi Hugo,
>>>>
>>>> Thanks for responding.
>>>>
>>>> http://paste.openstack.org/show/80857/
>>>>
>>>> Please let me know if the swift-ring-builder information is not what
>>>> you need in relation to rings.
>>>>
>>>> My long-term goal is 3 storage nodes with one proxy, but as I am
>>>> currently having issues I simplified this to 1 proxy and 1 storage
>>>> node running on the same host.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, May 19, 2014 at 4:11 PM, Kuo Hugo <tonytkdk at gmail.com> wrote:
>>>>
>>>>> Hi Clint,
>>>>>
>>>>> Would you please paste the proxy-server.conf and rings on
>>>>> http://paste.openstack.org/  ?
>>>>> Also, please show me the output of $ sudo ls -al /srv/node/sda4
>>>>>
>>>>> Thanks // Hugo
>>>>>
>>>>>
>>>>> 2014-05-18 23:45 GMT-04:00 Clint Dilks <clintd at waikato.ac.nz>:
>>>>>
>>>>> Nope,
>>>>>>
>>>>>> This install points to the RDO repositories, as described earlier
>>>>>> in the installation guide, but does not use packstack.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, May 19, 2014 at 3:38 PM, Remo Mattei <remo at italy1.com> wrote:
>>>>>>
>>>>>>> Hi, did you use packstack?
>>>>>>>
>>>>>>> Sent from iPhone
>>>>>>>
>>>>>>> On May 18, 2014, at 20:22, Clint Dilks <clintd at waikato.ac.nz>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Hi, I am installing Icehouse on CentOS 6.5 for the first time and
>>>>>>> am looking for some help with Swift.
>>>>>>>
>>>>>>> I have followed the guide here
>>>>>>> http://docs.openstack.org/icehouse/install-guide/install/yum/content/verify-object-storage-installation.html
>>>>>>>
>>>>>>> Currently, swift stat appears to be working, but uploading files
>>>>>>> fails.
>>>>>>>
>>>>>>> [root at comet swifttest]# swift stat
>>>>>>>        Account: AUTH_d39dfee7f2ce4a86b8721365805eb858
>>>>>>>     Containers: 0
>>>>>>>        Objects: 0
>>>>>>>          Bytes: 0
>>>>>>>  Accept-Ranges: bytes
>>>>>>>    X-Timestamp: 1400466597.29362
>>>>>>>     X-Trans-Id: txa6295ab356d94e7baede8-0053797190
>>>>>>>   Content-Type: text/plain; charset=utf-8
>>>>>>>
>>>>>>> [root at comet swifttest]# swift upload myfiles test.txt
>>>>>>> Error trying to create container 'myfiles': 404 Not Found:
>>>>>>> <html><h1>Not Found</h1><p>The resource could not be found.<
>>>>>>> Object HEAD failed:
>>>>>>> http://comet.cms.waikato.ac.nz:8080:8080/v1/AUTH_d39dfee7f2ce4a86b8721365805eb858/myfiles/test.txt 400 Bad Request
>>>>>>>
>>>>>>> Looking in the logs I see the following, which doesn't give me a
>>>>>>> clue as to the issue.
>>>>>>>
>>>>>>> Any thoughts as to what the problem might be or how to diagnose the
>>>>>>> problem further would be appreciated.
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> May 19 15:10:15 comet container-replicator: Beginning replication run
>>>>>>> May 19 15:10:15 comet container-replicator: Replication run OVER
>>>>>>> May 19 15:10:15 comet container-replicator: Attempted to replicate 0
>>>>>>> dbs in 0.00175 seconds (0.00000/s)
>>>>>>> May 19 15:10:15 comet container-replicator: Removed 0 dbs
>>>>>>> May 19 15:10:15 comet container-replicator: 0 successes, 0 failures
>>>>>>> May 19 15:10:15 comet container-replicator: no_change:0 ts_repl:0
>>>>>>> diff:0 rsync:0 diff_capped:0 hashmatch:0 empty:0
>>>>>>> May 19 15:10:16 comet object-replicator: Starting object replication
>>>>>>> pass.
>>>>>>> May 19 15:10:16 comet object-replicator: Nothing replicated for
>>>>>>> 0.00157809257507 seconds.
>>>>>>> May 19 15:10:16 comet object-replicator: Object replication
>>>>>>> complete. (0.00 minutes)
>>>>>>> May 19 15:10:16 comet object-auditor: Begin object audit "forever"
>>>>>>> mode (ZBF)
>>>>>>> May 19 15:10:16 comet object-auditor: Begin object audit "forever"
>>>>>>> mode (ALL)
>>>>>>> May 19 15:10:16 comet object-auditor: Object audit (ZBF) "forever"
>>>>>>> mode completed: 0.00s. Total quarantined: 0, Total errors: 0, Total
>>>>>>> files/sec: 0.00, Total bytes/sec: 0.00, Auditing time: 0.00, Rate: 0.00
>>>>>>> May 19 15:10:16 comet object-auditor: Object audit (ALL) "forever"
>>>>>>> mode completed: 0.00s. Total quarantined: 0, Total errors: 0, Total
>>>>>>> files/sec: 0.00, Total bytes/sec: 0.00, Auditing time: 0.00, Rate: 0.00
>>>>>>> May 19 15:10:36 comet account-replicator: Beginning replication run
>>>>>>> May 19 15:10:36 comet account-replicator: Replication run OVER
>>>>>>> May 19 15:10:36 comet account-replicator: Attempted to replicate 0
>>>>>>> dbs in 0.00179 seconds (0.00000/s)
>>>>>>> May 19 15:10:36 comet account-replicator: Removed 0 dbs
>>>>>>> May 19 15:10:36 comet account-replicator: 0 successes, 0 failures
>>>>>>> May 19 15:10:36 comet account-replicator: no_change:0 ts_repl:0
>>>>>>> diff:0 rsync:0 diff_capped:0 hashmatch:0 empty:0
>>>>>>> May 19 15:10:40 comet account-server: 130.217.78.2 - -
>>>>>>> [19/May/2014:03:10:40 +0000] "HEAD
>>>>>>> /sda4/57207/AUTH_d39dfee7f2ce4a86b8721365805eb858" 204 -
>>>>>>> "tx6aa17b19d2974be088b8a-0053797630" "HEAD
>>>>>>> http://comet.cms.waikato.ac.nz:8080/v1/AUTH_d39dfee7f2ce4a86b8721365805eb858"
>>>>>>> "proxy-server 113431" 0.0029 ""
>>>>>>> May 19 15:10:40 comet account-server: 130.217.78.2 - -
>>>>>>> [19/May/2014:03:10:40 +0000] "PUT
>>>>>>> /sda4/71034/AUTH_d39dfee7f2ce4a86b8721365805eb858/clint" 404 -
>>>>>>> "tx6aa17b19d2974be088b8a-0053797630" "PUT
>>>>>>> http://comet.cms.waikato.ac.nz:8080/v1/AUTH_d39dfee7f2ce4a86b8721365805eb858/clint"
>>>>>>> "proxy-server 113431" 0.0004 ""
>>>>>>> May 19 15:10:41 comet account-server: 130.217.78.2 - -
>>>>>>> [19/May/2014:03:10:41 +0000] "HEAD
>>>>>>> /sda4/71034/AUTH_d39dfee7f2ce4a86b8721365805eb858/clint" 400 69
>>>>>>> "tx453bedf548144a8b8518c-0053797631" "HEAD
>>>>>>> http://comet.cms.waikato.ac.nz:8080/v1/AUTH_d39dfee7f2ce4a86b8721365805eb858/clint"
>>>>>>> "proxy-server 113431" 0.0014 ""
>>>>>>> May 19 15:10:41 comet account-server: 130.217.78.2 - -
>>>>>>> [19/May/2014:03:10:41 +0000] "HEAD
>>>>>>> /sda4/22389/AUTH_d39dfee7f2ce4a86b8721365805eb858/clint/blob.txt" 400 78
>>>>>>> "tx453bedf548144a8b8518c-0053797631" "HEAD
>>>>>>> http://comet.cms.waikato.ac.nz:8080/v1/AUTH_d39dfee7f2ce4a86b8721365805eb858/clint/blob.txt"
>>>>>>> "proxy-server 113431" 0.0003 ""
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
> _______________________________________________
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>