[Openstack] [openstack-dev] [kolla] ceph osd deploy fails

Eduardo Gonzalez dabarren at gmail.com
Wed Sep 26 13:45:15 UTC 2018


CC openstack so others can see the thread

On Wed, Sep 26, 2018 at 15:44, Eduardo Gonzalez (<dabarren at gmail.com>)
wrote:

> Hi, I'm not sure at this moment what your issue may be. Using external
> ceph with kolla-ansible is supported.
> Just to make sure: Rocky is not released yet in kolla/kolla-ansible, only a
> release candidate, with a proposal for release candidate 2 this week.
>
> To dig more into your issue: what is your configuration? Is there anything
> unusual about the servers? What steps were taken to define the OSD disks?
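>
> In case it helps while you check that: the kolla-ansible bootstrap only
> picks up disks that carry the GPT partition label it expects
> (KOLLA_CEPH_OSD_BOOTSTRAP_BS for a plain bluestore OSD, which is what your
> output shows). If I remember the docs correctly, a whole disk is normally
> prepared for it like this, assuming the disk is dedicated to the OSD and
> may be wiped:
>
> parted /dev/nvme0n1 -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS 1 -1
>
> Knowing whether the disk was labelled that way, and whether anything else
> already lives on it, would help narrow the failure down.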
>
> Regards
>
> On Wed, Sep 26, 2018 at 15:08, Florian Engelmann (<
> florian.engelmann at everyware.ch>) wrote:
>
>> Dear Eduardo,
>>
>> thank you for your fast response! I noticed those fixes, and we are
>> using stable/rocky from yesterday because of those commits (using the
>> tarballs, not the git repository).
>>
>> I guess you are talking about:
>>
>> https://github.com/openstack/kolla-ansible/commit/ef6921e6d7a0922f68ffb05bd022aab7c2882473
>>
>> I saw that one in kolla as well:
>>
>> https://github.com/openstack/kolla/commit/60f0ea10bfdff12d847d9cb3b51ce02ffe96d6e1
>>
>> So we are using Ceph 12.2.4 right now, with everything up to the 24th of
>> September in stable/rocky.
>>
>> Anything else we could test/change?
>>
>> We are at the point of deploying Ceph separately from kolla (using
>> ceph-ansible) because we need a working environment tomorrow. Do you see
>> a realistic chance of getting Ceph up and running via kolla-ansible today?
>>
>>
>> All the best,
>> Flo
>>
>>
>>
>>
>> On 26.09.18 at 14:44, Eduardo Gonzalez wrote:
>> > Hi, what version of Rocky are you using? Maybe you hit the middle of a
>> > backport which temporarily broke Ceph.
>> >
>> > Could you try the latest stable/rocky branch?
>> >
>> > It is now working properly.
>> >
>> > Regards
>> >
>> > On Wed, Sep 26, 2018, 2:32 PM Florian Engelmann
>> > <florian.engelmann at everyware.ch> wrote:
>> >
>> >     Hi,
>> >
>> >     I tried to deploy Rocky in a multinode setup but ceph-osd fails with:
>> >
>> >
>> >     failed: [xxxxxxxxxxx-poc2] (item=[0, {u'fs_uuid': u'',
>> >     u'bs_wal_label': u'', u'external_journal': False, u'bs_blk_label': u'',
>> >     u'bs_db_partition_num': u'', u'journal_device': u'', u'journal': u'',
>> >     u'partition': u'/dev/nvme0n1', u'bs_wal_partition_num': u'',
>> >     u'fs_label': u'', u'journal_num': 0, u'bs_wal_device': u'',
>> >     u'partition_num': u'1', u'bs_db_label': u'', u'bs_blk_partition_num': u'',
>> >     u'device': u'/dev/nvme0n1', u'bs_db_device': u'',
>> >     u'partition_label': u'KOLLA_CEPH_OSD_BOOTSTRAP_BS', u'bs_blk_device': u''}]) => {
>> >           "changed": true,
>> >           "item": [
>> >               0,
>> >               {
>> >                   "bs_blk_device": "",
>> >                   "bs_blk_label": "",
>> >                   "bs_blk_partition_num": "",
>> >                   "bs_db_device": "",
>> >                   "bs_db_label": "",
>> >                   "bs_db_partition_num": "",
>> >                   "bs_wal_device": "",
>> >                   "bs_wal_label": "",
>> >                   "bs_wal_partition_num": "",
>> >                   "device": "/dev/nvme0n1",
>> >                   "external_journal": false,
>> >                   "fs_label": "",
>> >                   "fs_uuid": "",
>> >                   "journal": "",
>> >                   "journal_device": "",
>> >                   "journal_num": 0,
>> >                   "partition": "/dev/nvme0n1",
>> >                   "partition_label": "KOLLA_CEPH_OSD_BOOTSTRAP_BS",
>> >                   "partition_num": "1"
>> >               }
>> >           ]
>> >     }
>> >
>> >     MSG:
>> >
>> >     Container exited with non-zero return code 2
>> >
>> >     We tried to debug the error message by starting the container with a
>> >     modified endpoint but we are stuck at the following point right now:
>> >
>> >
>> >     docker run -e "HOSTNAME=10.0.153.11" -e "JOURNAL_DEV=" -e
>> >     "JOURNAL_PARTITION=" -e "JOURNAL_PARTITION_NUM=0" -e
>> >     "KOLLA_BOOTSTRAP=null" -e "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS" -e
>> >     "KOLLA_SERVICE_NAME=bootstrap-osd-0" -e "OSD_BS_BLK_DEV=" -e
>> >     "OSD_BS_BLK_LABEL=" -e "OSD_BS_BLK_PARTNUM=" -e "OSD_BS_DB_DEV=" -e
>> >     "OSD_BS_DB_LABEL=" -e "OSD_BS_DB_PARTNUM=" -e "OSD_BS_DEV=/dev/nvme0n1" -e
>> >     "OSD_BS_LABEL=KOLLA_CEPH_OSD_BOOTSTRAP_BS" -e "OSD_BS_PARTNUM=1" -e
>> >     "OSD_BS_WAL_DEV=" -e "OSD_BS_WAL_LABEL=" -e "OSD_BS_WAL_PARTNUM=" -e
>> >     "OSD_DEV=/dev/nvme0n1" -e "OSD_FILESYSTEM=xfs" -e "OSD_INITIAL_WEIGHT=1" -e
>> >     "OSD_PARTITION=/dev/nvme0n1" -e "OSD_PARTITION_NUM=1" -e
>> >     "OSD_STORETYPE=bluestore" -e "USE_EXTERNAL_JOURNAL=false" -v
>> >     "/etc/kolla//ceph-osd/:/var/lib/kolla/config_files/:ro" -v
>> >     "/etc/localtime:/etc/localtime:ro" -v "/dev/:/dev/" -v
>> >     "kolla_logs:/var/log/kolla/" -ti --privileged=true --entrypoint /bin/bash
>> >     10.0.128.7:5000/openstack/openstack-kolla-cfg/ubuntu-source-ceph-osd:7.0.0.3
>> >
>> >
>> >
>> >     cat /var/lib/kolla/config_files/ceph.client.admin.keyring >
>> >     /etc/ceph/ceph.client.admin.keyring
>> >
>> >
>> >     cat /var/lib/kolla/config_files/ceph.conf > /etc/ceph/ceph.conf
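>> >
>> >     (With the stock entrypoint this copying is done for us: the base
>> >     image's start script runs, roughly,
>> >
>> >     sudo -E kolla_set_configs
>> >
>> >     which reads /var/lib/kolla/config_files/config.json and copies the
>> >     listed files into place. Since we overrode the entrypoint we copied
>> >     the keyring and ceph.conf by hand instead.)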
>> >
>> >
>> >     (bootstrap-osd-0)[root at 985e2dee22bc /]# /usr/bin/ceph-osd -d
>> >     --public-addr 10.0.153.11 --cluster-addr 10.0.153.11
>> >     usage: ceph-osd -i <ID> [flags]
>> >         --osd-data PATH data directory
>> >         --osd-journal PATH
>> >                           journal file or block device
>> >         --mkfs            create a [new] data directory
>> >         --mkkey           generate a new secret key. This is normally
>> >                           used in combination with --mkfs
>> >         --convert-filestore
>> >                           run any pending upgrade operations
>> >         --flush-journal   flush all data out of journal
>> >         --mkjournal       initialize a new journal
>> >         --check-wants-journal
>> >                           check whether a journal is desired
>> >         --check-allows-journal
>> >                           check whether a journal is allowed
>> >         --check-needs-journal
>> >                           check whether a journal is required
>> >         --debug_osd <N>   set debug level (e.g. 10)
>> >         --get-device-fsid PATH
>> >                           get OSD fsid for the given block device
>> >
>> >         --conf/-c FILE    read configuration from the given
>> >                           configuration file
>> >         --id/-i ID        set ID portion of my name
>> >         --name/-n TYPE.ID set name
>> >         --cluster NAME    set cluster name (default: ceph)
>> >         --setuser USER    set uid to user or uid (and gid to user's gid)
>> >         --setgroup GROUP  set gid to group or gid
>> >         --version         show version and quit
>> >
>> >         -d                run in foreground, log to stderr.
>> >         -f                run in foreground, log to usual location.
>> >         --debug_ms N      set message debug level (e.g. 1)
>> >     2018-09-26 12:28:07.801066 7fbda64b4e40  0 ceph version 12.2.4
>> >     (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable),
>> >     process (unknown), pid 46
>> >     2018-09-26 12:28:07.801078 7fbda64b4e40 -1 must specify '-i #'
>> >     where # is the osd number
>> >
>> >
>> >     But it looks like "-i" is not set anywhere?
>> >
>> >     grep command /opt/stack/kolla-ansible/ansible/roles/ceph/templates/ceph-osd.json.j2
>> >     "command": "/usr/bin/ceph-osd -f --public-addr {{
>> >     hostvars[inventory_hostname]['ansible_' +
>> >     storage_interface]['ipv4']['address'] }} --cluster-addr {{
>> >     hostvars[inventory_hostname]['ansible_' +
>> >     cluster_interface]['ipv4']['address'] }}",
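>> >
>> >     As far as I understand the kolla images, the "-i" is not supposed to
>> >     come from that template: the base image's start script reads that
>> >     command from config.json, then sources the ceph-osd extend_start.sh,
>> >     which handles the OSD bootstrap and appends the OSD ID as extra
>> >     arguments. Very roughly, and only as a sketch of that flow:
>> >
>> >     CMD=$(cat /run_command)   # the "command" from config.json
>> >     . kolla_extend_start      # bootstrap / sets e.g. ARGS="-i ${OSD_ID} ..."
>> >     exec ${CMD} ${ARGS}
>> >
>> >     So running /usr/bin/ceph-osd by hand with a replaced entrypoint skips
>> >     that step, and the "must specify '-i #'" message is probably just a
>> >     consequence of that, not the original bootstrap error.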
>> >
>> >     What's wrong with our setup?
>> >
>> >     All the best,
>> >     Flo
>> >
>> >
>> >     --
>> >
>> >     EveryWare AG
>> >     Florian Engelmann
>> >     Systems Engineer
>> >     Zurlindenstrasse 52a
>> >     CH-8003 Zürich
>> >
>> >     tel: +41 44 466 60 00
>> >     fax: +41 44 466 60 10
>> >     mail: florian.engelmann at everyware.ch
>> >     web: http://www.everyware.ch
>> >
>>
>> --
>>
>> EveryWare AG
>> Florian Engelmann
>> Systems Engineer
>> Zurlindenstrasse 52a
>> CH-8003 Zürich
>>
>> tel: +41 44 466 60 00
>> fax: +41 44 466 60 10
>> mail: florian.engelmann at everyware.ch
>> web: http://www.everyware.ch
>>
>