[openstack-dev] [Ironic][Bifrost] Manually enrolling a node

Mark Goddard mark at stackhpc.com
Mon Mar 5 10:25:16 UTC 2018


I think the issue is that the default behaviour of the client was changed
in the Queens release (2.0.0). Previously the default API microversion was
1.9, but now it is "latest" [1]. It looks like the deploy guide needs some
more conditional wording. I've raised a bug [2]; feel free to comment if
I've missed something.
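If you need the previous behaviour in the meantime, you can pin the
microversion explicitly rather than relying on the client default. A minimal
sketch (the environment variable name is an assumption, so check the client
docs for your release):

ironic --ironic-api-version 1.9 node-list
# or, for the whole shell session (variable name assumed):
export IRONIC_API_VERSION=1.9
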
Mark

[1] https://docs.openstack.org/releasenotes/python-ironicclient/queens.html
[2] https://bugs.launchpad.net/ironic/+bug/1753435

On 4 March 2018 at 23:32, Michael Still <mikal at stillhq.com> wrote:

> I think this one might be a bug in the deploy guide then. It states:
>
> "In order for nodes to be available for deploying workloads on them, nodes
> must be in the available provision state. To do this, nodes created with
> API version 1.11 and above must be moved from the enroll state to the
> manageable state and then to the available state. This section can be
> safely skipped, if API version 1.10 or earlier is used (which is the case
> by default)."
>
> Whereas I definitely had to move the node to the manage provision state
> manually to get the node managed. For reference, this is the set of
> command lines I ended up using to manually enroll a node (in case it's of
> use to someone else):
>
> ironic node-create -d agent_ipmitool \
>           -i ipmi_username=root \
>           -i ipmi_password=superuser \
>           -i ipmi_address=192.168.50.31 \
>           -i deploy_kernel=http://192.168.50.209:8080/ipa.vmlinuz \
>           -i deploy_ramdisk=http://192.168.50.209:8080/ipa.initramfs \
>           -p cpus=16 \
>           -p memory_mb=12288 \
>           -p local_gb=750 \
>           -p cpu_arch=x86_64 \
>           -p capabilities=boot_option:local \
>           -n lab8
> ironic port-create -n ${UUID}  -a ${DHCP_MAC}
> ironic node-validate lab8
> ironic --ironic-api-version 1.11 node-set-provision-state lab8 manage
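> To then take the node the rest of the way to available (the enroll ->
> manageable -> available flow from the deploy guide), the remaining step
> would be something like this sketch, assuming the node reaches manageable
> cleanly:
>
> ironic --ironic-api-version 1.11 node-set-provision-state lab8 provide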
>
> Michael
>
> On Mon, Mar 5, 2018 at 9:05 AM, Mark Goddard <mark at stackhpc.com> wrote:
>
>> Regarding the enroll state: you can move the node to available via
>> manageable by setting the provision state to manage, then provide.
>>
>> Try an ironic node-validate to diagnose the issue, and make sure the IPMI
>> credentials given can be used to query the node's power state using ipmitool.
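>>
>> For example, a quick sketch using the credentials from the node-create
>> above (the -I interface option may need adjusting for your BMC):
>>
>> ipmitool -I lanplus -H 192.168.50.31 -U root -P superuser power status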
>>
>> Mark
>>
>> On 4 Mar 2018 9:42 p.m., "Mark Goddard" <mark at stackhpc.com> wrote:
>>
>>> Try setting the ironic_log_dir variable to /var/log/ironic, or setting
>>> [DEFAULT] log_dir to the same in ironic.conf.
>>>
>>> I'm surprised it's not logging to a file by default.
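>>>
>>> As a sketch, something like this in ironic.conf (restart the conductor
>>> afterwards for it to take effect):
>>>
>>> [DEFAULT]
>>> log_dir = /var/log/ironic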
>>>
>>> Mark
>>>
>>> On 4 Mar 2018 8:33 p.m., "Michael Still" <mikal at stillhq.com> wrote:
>>>
>>>> Ok, so I applied your patch and redeployed. I now get a list of drivers
>>>> in "ironic driver-list", and I can now enroll a node.
>>>>
>>>> Interestingly, the node sits in the "enroll" provisioning state for
>>>> ages and doesn't appear to ever get a meaningful power state ("ever"
>>>> here being after a five-minute wait). There are still no logs in
>>>> /var/log/ironic, and grepping for the node's uuid in /var/log/syslog
>>>> returns zero log items.
>>>>
>>>> Your thoughts?
>>>>
>>>> Michael
>>>>
>>>>
>>>>
>>>> On Mon, Mar 5, 2018 at 7:04 AM, Mark Goddard <mark at stackhpc.com> wrote:
>>>>
>>>>> The iLO hardware type was also not loading because the required
>>>>> management and power interfaces were not enabled. The patch should address
>>>>> that, but please let us know if there are further issues.
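>>>>>
>>>>> For reference, enabling iLO alongside IPMI looks roughly like this in
>>>>> ironic.conf (a sketch; the exact interface names depend on your release):
>>>>>
>>>>> [DEFAULT]
>>>>> enabled_hardware_types = ipmi,ilo
>>>>> enabled_management_interfaces = ipmitool,ilo
>>>>> enabled_power_interfaces = ipmitool,ilo
>>>>>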
>>>>> Mark
>>>>>
>>>>>
>>>>> On 4 Mar 2018 7:59 p.m., "Michael Still" <mikal at stillhq.com> wrote:
>>>>>
>>>>> Replying to a single email because I am lazier than you.
>>>>>
>>>>> I would have included logs, except /var/log/ironic on the bifrost
>>>>> machine is empty. There are entries in syslog, but nothing that seems
>>>>> related (it's all periodic-task kind of stuff).
>>>>>
>>>>> However, Mark is right. I had an /etc/ironic/ironic.conf with "ucs" as
>>>>> a hardware type. I've removed ucs entirely from that list and restarted
>>>>> conductor, but that didn't help. I suspect
>>>>> https://review.openstack.org/#/c/549318/3 is more subtle than that. I
>>>>> will patch in that change and see if I can get things to work after a
>>>>> redeploy.
>>>>>
>>>>> Michael
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Mar 5, 2018 at 5:45 AM, Mark Goddard <mark at stackhpc.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Michael,
>>>>>>
>>>>>> If you're using the latest release of bifrost, I suspect you're
>>>>>> hitting https://bugs.launchpad.net/bifrost/+bug/1752975. I've
>>>>>> submitted a fix for review.
>>>>>>
>>>>>> For a workaround, modify /etc/ironic/ironic.conf, and set
>>>>>> enabled_hardware_types=ipmi.
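>>>>>>
>>>>>> That is, a sketch (the conductor service name may differ on your
>>>>>> system):
>>>>>>
>>>>>> [DEFAULT]
>>>>>> enabled_hardware_types = ipmi
>>>>>>
>>>>>> sudo systemctl restart ironic-conductor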
>>>>>>
>>>>>> Cheers,
>>>>>> Mark
>>>>>>
>>>>>> On 4 Mar 2018 5:50 p.m., "Julia Kreger" <juliaashleykreger at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> > No valid host was found. Reason: No conductor service registered
>>>>>>> which
>>>>>>> > supports driver agent_ipmitool. (HTTP 400)
>>>>>>> >
>>>>>>> > I can't see anything helpful in the logs. What driver should I be
>>>>>>> using for
>>>>>>> > bifrost? agent_ipmitool seems to be enabled in ironic.conf.
>>>>>>>
>>>>>>> Weird, I'm wondering what the error is in the conductor log. You can
>>>>>>> try using the "ipmi" hardware type, which replaces
>>>>>>> agent_ipmitool/pxe_ipmitool.
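>>>>>>>
>>>>>>> For example, a sketch reusing the IPMI details from the node-create
>>>>>>> above (older clients may need --ironic-api-version raised to accept
>>>>>>> hardware types):
>>>>>>>
>>>>>>> ironic node-create -d ipmi \
>>>>>>>           -i ipmi_username=root \
>>>>>>>           -i ipmi_password=superuser \
>>>>>>>           -i ipmi_address=192.168.50.31 \
>>>>>>>           -n lab8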
>>>>>>>
>>>>>>> -Julia
>>>>>>>

