<div dir="ltr">CC openstack so others can see the thread<br><br><div class="gmail_quote"><div dir="ltr">El mié., 26 sept. 2018 a las 15:44, Eduardo Gonzalez (<<a href="mailto:dabarren@gmail.com">dabarren@gmail.com</a>>) escribió:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Hi, i'm not sure at this moment at what your issue may be. Using external ceph with kolla-ansible is supported.</div><div>Just
 to make sure, rocky is not released yet in kolla/-ansible, only a 
release candidate and a proposal for release candidate 2 this week.</div><div><br></div><div>To
 dig more into your issue, what are your config? Anything out of the box
 in the servers? What steps was made to define the osd disks?</div><div><br></div><div>Regards</div></div><br><div class="gmail_quote"><div dir="ltr">El mié., 26 sept. 2018 a las 15:08, Florian Engelmann (<<a href="mailto:florian.engelmann@everyware.ch" target="_blank">florian.engelmann@everyware.ch</a>>) escribió:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Dear Eduardo,<br>
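(For reference: kolla-ansible's ceph-osd role discovers OSD disks by GPT partition label. A minimal preparation step for a bluestore OSD that hands the whole device to Ceph, assuming /dev/nvme0n1 is the intended data disk, would look roughly like this:

    # label the disk so the kolla-ansible ceph-osd bootstrap picks it up (bluestore)
    parted /dev/nvme0n1 -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS 1 -1

The KOLLA_CEPH_OSD_BOOTSTRAP_BS partition label visible in the failing item further down suggests the disks were prepared along these lines.)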

On Wed, Sep 26, 2018 at 15:08, Florian Engelmann (florian.engelmann@everyware.ch) wrote:

Dear Eduardo,

thank you for your fast response! I noticed those fixes, and we are
using stable/rocky from yesterday because of those commits (using the
tarballs, not the git repository).

I guess you are talking about:
https://github.com/openstack/kolla-ansible/commit/ef6921e6d7a0922f68ffb05bd022aab7c2882473

I saw that one in kolla as well:
https://github.com/openstack/kolla/commit/60f0ea10bfdff12d847d9cb3b51ce02ffe96d6e1

So we are using Ceph 12.2.4 right now and everything up to the 24th of
September in stable/rocky.

Anything else we could test/change?
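(One thing that might be worth trying before switching tools: re-running only the Ceph part of the deploy so the failure is quicker to reproduce. This assumes the standard kolla-ansible CLI wrapper, which passes --tags through to ansible-playbook, and an inventory file named "multinode" as a placeholder:

    # re-run only the ceph role of the deployment
    kolla-ansible -i multinode deploy --tags ceph

This is a sketch, not a confirmed fix for the bootstrap failure.)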

We are at the point of deploying Ceph separately from kolla (using
ceph-ansible) because we need a working environment tomorrow. Do you see
a realistic chance of getting Ceph up and running via kolla-ansible today?


All the best,
Flo


On 26.09.18 at 14:44, Eduardo Gonzalez wrote:
> Hi, what version of Rocky are you using? Maybe it was in the middle of a
> backport which temporarily broke Ceph.
> 
> Could you try the latest stable/rocky branch?
> 
> It is now working properly.
> 
> Regards
> 
> On Wed, Sep 26, 2018, 2:32 PM Florian Engelmann
> <florian.engelmann@everyware.ch> wrote:
> 
>     Hi,
> 
>     I tried to deploy Rocky in a multinode setup but ceph-osd fails with:
> 
> 
>     failed: [xxxxxxxxxxx-poc2] (item=[0, {u'fs_uuid': u'', u'bs_wal_label':
>     u'', u'external_journal': False, u'bs_blk_label': u'',
>     u'bs_db_partition_num': u'', u'journal_device': u'', u'journal': u'',
>     u'partition': u'/dev/nvme0n1', u'bs_wal_partition_num': u'',
>     u'fs_label': u'', u'journal_num': 0, u'bs_wal_device': u'',
>     u'partition_num': u'1', u'bs_db_label': u'', u'bs_blk_partition_num':
>     u'', u'device': u'/dev/nvme0n1', u'bs_db_device': u'',
>     u'partition_label': u'KOLLA_CEPH_OSD_BOOTSTRAP_BS', u'bs_blk_device':
>     u''}]) => {
>           "changed": true,
>           "item": [
>               0,
>               {
>                   "bs_blk_device": "",
>                   "bs_blk_label": "",
>                   "bs_blk_partition_num": "",
>                   "bs_db_device": "",
>                   "bs_db_label": "",
>                   "bs_db_partition_num": "",
>                   "bs_wal_device": "",
>                   "bs_wal_label": "",
>                   "bs_wal_partition_num": "",
>                   "device": "/dev/nvme0n1",
>                   "external_journal": false,
>                   "fs_label": "",
>                   "fs_uuid": "",
>                   "journal": "",
>                   "journal_device": "",
>                   "journal_num": 0,
>                   "partition": "/dev/nvme0n1",
>                   "partition_label": "KOLLA_CEPH_OSD_BOOTSTRAP_BS",
>                   "partition_num": "1"
>               }
>           ]
>     }
> 
>     MSG:
> 
>     Container exited with non-zero return code 2
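(If the exited bootstrap container is still around after the failure (it may be cleaned up automatically on exit), its output usually contains the real error behind the return code. A quick check, assuming kolla-ansible's usual naming scheme, might look like:

    # find the exited ceph OSD bootstrap container and dump its output
    docker ps -a | grep -i bootstrap_osd
    docker logs bootstrap_osd_0 2>&1 | tail -n 50

The name bootstrap_osd_0 is an assumption derived from KOLLA_SERVICE_NAME=bootstrap-osd-0 in the docker run command quoted below; the exact name may differ.)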
> 
>     We tried to debug the error message by starting the container with a
>     modified entrypoint but we are stuck at the following point right now:
> 
> 
>     docker run  -e "HOSTNAME=10.0.153.11" -e "JOURNAL_DEV=" -e
>     "JOURNAL_PARTITION=" -e "JOURNAL_PARTITION_NUM=0" -e
>     "KOLLA_BOOTSTRAP=null" -e "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS" -e
>     "KOLLA_SERVICE_NAME=bootstrap-osd-0" -e "OSD_BS_BLK_DEV=" -e
>     "OSD_BS_BLK_LABEL=" -e "OSD_BS_BLK_PARTNUM=" -e "OSD_BS_DB_DEV=" -e
>     "OSD_BS_DB_LABEL=" -e "OSD_BS_DB_PARTNUM=" -e "OSD_BS_DEV=/dev/nvme0n1"
>     -e "OSD_BS_LABEL=KOLLA_CEPH_OSD_BOOTSTRAP_BS" -e "OSD_BS_PARTNUM=1" -e
>     "OSD_BS_WAL_DEV=" -e "OSD_BS_WAL_LABEL=" -e "OSD_BS_WAL_PARTNUM=" -e
>     "OSD_DEV=/dev/nvme0n1" -e "OSD_FILESYSTEM=xfs" -e
>     "OSD_INITIAL_WEIGHT=1"
>     -e "OSD_PARTITION=/dev/nvme0n1" -e "OSD_PARTITION_NUM=1" -e
>     "OSD_STORETYPE=bluestore" -e "USE_EXTERNAL_JOURNAL=false"   -v
>     "/etc/kolla//ceph-osd/:/var/lib/kolla/config_files/:ro" -v
>     "/etc/localtime:/etc/localtime:ro" -v "/dev/:/dev/" -v
>     "kolla_logs:/var/log/kolla/" -ti --privileged=true --entrypoint
>     /bin/bash
>     10.0.128.7:5000/openstack/openstack-kolla-cfg/ubuntu-source-ceph-osd:7.0.0.3
> 
> 
> 
>     cat /var/lib/kolla/config_files/ceph.client.admin.keyring >
>     /etc/ceph/ceph.client.admin.keyring
> 
> 
>     cat /var/lib/kolla/config_files/ceph.conf > /etc/ceph/ceph.conf
> 
> 
>     (bootstrap-osd-0)[root@985e2dee22bc /]# /usr/bin/ceph-osd -d
>     --public-addr 10.0.153.11 --cluster-addr 10.0.153.11
>     usage: ceph-osd -i <ID> [flags]
>         --osd-data PATH data directory
>         --osd-journal PATH
>                           journal file or block device
>         --mkfs            create a [new] data directory
>         --mkkey           generate a new secret key. This is normally
>     used in
>     combination with --mkfs
>         --convert-filestore
>                           run any pending upgrade operations
>         --flush-journal   flush all data out of journal
>         --mkjournal       initialize a new journal
>         --check-wants-journal
>                           check whether a journal is desired
>         --check-allows-journal
>                           check whether a journal is allowed
>         --check-needs-journal
>                           check whether a journal is required
>         --debug_osd <N>   set debug level (e.g. 10)
>         --get-device-fsid PATH
>                           get OSD fsid for the given block device
> 
>         --conf/-c FILE    read configuration from the given
>     configuration file
>         --id/-i ID        set ID portion of my name
>         --name/-n TYPE.ID set name
>         --cluster NAME    set cluster name (default: ceph)
>         --setuser USER    set uid to user or uid (and gid to user's gid)
>         --setgroup GROUP  set gid to group or gid
>         --version         show version and quit
> 
>         -d                run in foreground, log to stderr.
>         -f                run in foreground, log to usual location.
>         --debug_ms N      set message debug level (e.g. 1)
>     2018-09-26 12:28:07.801066 7fbda64b4e40  0 ceph version 12.2.4
>     (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable), process
>     (unknown), pid 46
>     2018-09-26 12:28:07.801078 7fbda64b4e40 -1 must specify '-i #' where #
>     is the osd number
> 
> 
>     But it looks like "-i" is not set anywhere?
> 
>     grep command
>     /opt/stack/kolla-ansible/ansible/roles/ceph/templates/ceph-osd.json.j2
>     "command": "/usr/bin/ceph-osd -f --public-addr {{
>     hostvars[inventory_hostname]['ansible_' +
>     storage_interface]['ipv4']['address'] }} --cluster-addr {{
>     hostvars[inventory_hostname]['ansible_' +
>     cluster_interface]['ipv4']['address'] }}",
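(A note on the manual reproduction: the missing "-i" is expected when the bare ceph-osd binary is invoked by hand. In the kolla images the OSD id is normally supplied by the container's startup scripts (kolla_start and the ceph-osd extend_start logic), which during bootstrap prepare the disk and register the OSD before ceph-osd is ever started with a concrete id. A manual run closer to what kolla-ansible actually does would therefore keep the image's default entrypoint instead of overriding it with /bin/bash, roughly:

    # same docker run as above, but without "-ti --entrypoint /bin/bash", so the
    # image's own kolla_start/extend_start bootstrap logic runs and prints the
    # real error behind "return code 2"; "..." stands for the same -e/-v options
    docker run ... \
        10.0.128.7:5000/openstack/openstack-kolla-cfg/ubuntu-source-ceph-osd:7.0.0.3

This is a sketch under those assumptions, not the exact kolla-ansible invocation.)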
> 
>     What's wrong with our setup?
> 
>     All the best,
>     Flo
> 
> 
>     -- 
> 
>     EveryWare AG
>     Florian Engelmann
>     Systems Engineer
>     Zurlindenstrasse 52a
>     CH-8003 Zürich
> 
>     tel: +41 44 466 60 00
>     fax: +41 44 466 60 10
>     mail: florian.engelmann@everyware.ch
>     web: http://www.everyware.ch

-- 

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: florian.engelmann@everyware.ch
web: http://www.everyware.ch