[Openstack] Integration with OpenStack

Avi Tal avi3tal at gmail.com
Sat May 31 20:58:10 UTC 2014


I wouldn't like to use Foreman for the entire solution, because I believe
OpenStack is the future; I prefer treating OpenStack as the focal point
and Foreman as a bare-metal workaround until TripleO or Ironic is ready
for production.
Foreman will also be used for the post-install requirements. For example, I
just want to reuse an existing client in a different setup and update some
software versions.

Using only Foreman's API would make it harder to fully integrate with an
OpenStack lifecycle solution in the future.
Another big advantage of OpenStack is its Python SDK.

As for the Foreman discovery process you mentioned: do I have to
delete hosts from Foreman, or can I just query for facts and send hosts for a
new PXE install on demand (by MAC or something)?
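Roughly, I imagine something like this -- a sketch against what I understand to be Foreman's v2 REST API, using only the standard library (the Foreman URL is a placeholder, and the exact endpoints should be checked against the API docs):

```python
import json
import urllib.parse
import urllib.request

FOREMAN = "https://foreman.example.com"  # placeholder Foreman URL


def host_search_params(mac):
    # Foreman's search syntax can look a host up by MAC address
    return urllib.parse.urlencode({"search": f"mac = {mac}"})


def find_host_by_mac(mac):
    # GET /api/v2/hosts?search=... returns matching host records; each
    # host's facts are then available under /api/v2/hosts/:id/facts
    url = f"{FOREMAN}/api/v2/hosts?{host_search_params(mac)}"
    with urllib.request.urlopen(url) as resp:
        results = json.load(resp)["results"]
    return results[0] if results else None


def rebuild_request(host_id):
    # PUT /api/v2/hosts/:id with build=true flags the host for a fresh
    # PXE install on its next boot -- no deletion from Foreman needed
    body = json.dumps({"host": {"build": True}}).encode()
    return urllib.request.Request(
        f"{FOREMAN}/api/v2/hosts/{host_id}", data=body, method="PUT",
        headers={"Content-Type": "application/json"})
```

That is, instead of deleting the host, look it up by MAC, flag it for build, and power-cycle it.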


On Sat, May 31, 2014 at 9:03 PM, Matt Jarvis <matt.jarvis at datacentred.co.uk>
wrote:

> Foreman supports both bare-metal and OpenStack provisioning, so you could
> use the Foreman API to achieve all of this. Off the top of my head, for
> bare metal you'd use the Foreman discovery plugin so that non-configured
> hosts show up as discovered hosts: get a list of hosts in the
> discovered state, provision and use them for tests, then once you've
> finished with them, delete them from Foreman and reboot them so that they
> come back into the discovered-hosts pool. For OpenStack hosts you can do
> the whole lifecycle-management piece through the Foreman API.
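That discovered-hosts cycle might look roughly like this, assuming the REST endpoints the discovery plugin adds (the Foreman URL is a placeholder; verify the paths against the plugin's docs):

```python
import urllib.request

FOREMAN = "https://foreman.example.com"  # placeholder Foreman URL


def discovered_hosts_url():
    # The discovery plugin lists PXE-booted, not-yet-provisioned hosts here
    return f"{FOREMAN}/api/v2/discovered_hosts"


def pick_hosts(listing, count):
    # Take `count` hosts from a parsed discovered-hosts listing to hand
    # to the test run that requested them
    return [h["name"] for h in listing["results"][:count]]


def release_request(host_id):
    # Deleting the host record and rebooting the node sends it back
    # through discovery into the free pool
    return urllib.request.Request(
        f"{FOREMAN}/api/v2/hosts/{host_id}", method="DELETE")
```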
>
>
> On 31 May 2014 14:42, Avi Tal <avi3tal at gmail.com> wrote:
>
>> Hi Alex,
>> First of all, thanks for the excellent answer. Indeed, I'll be
>> participating in the event in Israel. It will be great to meet face-to-face
>> and discuss these scenarios.
>>
>> Thanks
>>
>>
>> On Sat, May 31, 2014 at 12:40 PM, Alex Glikson <GLIKSON at il.ibm.com>
>> wrote:
>>
>>> Hi Avi,
>>>
>>> This is a very interesting use-case. We have been experimenting
>>> internally with similar ideas (dynamic management of virtualized and
>>> bare-metal resources).
>>> In a nutshell, you can use Heat templates to provision the different
>>> environments. For bare metal, you can configure Nova to surface bare-metal
>>> flavors (with the nova-baremetal driver underneath), mapped to a dedicated
>>> host aggregate comprising bare-metal machines. You can build bare-metal
>>> images with diskimage-builder from the TripleO project.
>>> If you don't have non-trivial networking requirements (e.g., you can work
>>> with a single flat network), things might work pretty much out of the box.
>>> Things get a bit more complicated if you want to dynamically
>>> re-purpose physical nodes between virtualized and bare-metal workloads.
>>> Depending on the nature of your workloads (e.g., your ability to predict
>>> the desired size of each pool), you might consider using something like
>>> Heat auto-scaling to drive the outer control loop (though it might require
>>> some code changes to work properly in this case). Alternatively, this
>>> logic can live externally, invoking Heat for provisioning (you can also
>>> use Heat + nova-baremetal + TripleO tools to provision the compute nodes
>>> themselves).
>>> There are many nuances to making it work, but with certain simplifying
>>> assumptions it seems feasible to come up with a 'native' OpenStack
>>> solution with a minimal amount of custom code.
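As a rough illustration of that approach, a HOT template along these lines could request a pool of bare-metal nodes. The flavor, image, and network names below are placeholders, assuming Nova has been configured to expose a bare-metal flavor backed by a host aggregate of physical machines:

```yaml
heat_template_version: 2013-05-23

description: >
  Hypothetical environment template: two bare-metal CentOS nodes on a
  single flat network (all resource names here are placeholders).

resources:
  server_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: 2
      resource_def:
        type: OS::Nova::Server
        properties:
          flavor: baremetal.large        # bare-metal flavor surfaced by Nova
          image: centos-6.5-baremetal    # image built with diskimage-builder
          networks:
            - network: flat-lab-net      # the single flat network assumed above
```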
>>>
>>> Regards,
>>> Alex
>>>
>>> P.S. We are going to present some related work at the OpenStack Israel
>>> event on Monday - we can follow up face-to-face if you plan to be there too.
>>>
>>>
>>>
>>>
>>> From:        Avi Tal <avi3tal at gmail.com>
>>> To:        openstack at lists.openstack.org,
>>> Date:        31/05/2014 11:34 AM
>>> Subject:        [Openstack] Integration with OpenStack
>>> ------------------------------
>>>
>>>
>>>
>>> Hi all,
>>> I am designing a "Dynamic Resource Allocation" system for my company's
>>> lab resources. The focal point of this solution should be OpenStack.
>>>
>>> *Background:*
>>> The testing and dev environments are built out of multiple nodes
>>> (servers and clients). Some can be virtual, but bare metal must also be
>>> supported.
>>> The goal is to manage the resource pool (both virtual and physical)
>>> dynamically: let an automated test request a specific environment by
>>> posting the environment details, and release it back to the pool at the
>>> end of the test.
>>>
>>> *Example:*
>>>
>>> *Request:*
>>> client:
>>>     count: 2
>>>     type: virtual
>>>     os: fedora 20
>>>     memory: 2GB
>>>     cpu: 4
>>>     disk: >200G
>>>     packages: ['puppet', 'fio', 'python-2.7']
>>> client:
>>>     count: 4
>>>     type: physical
>>>     os: centos-6.5
>>>     memory: 2GB
>>>     cpu: 4
>>>     disk: >100G flash
>>>     packages: ['puppet', 'fio', 'python-2.7']
>>> server:
>>>     count: 2
>>>     type: physical
>>>     os: centos-6.5
>>>     build: 'b10'
>>>
>>> *Response:*
>>> clients:
>>>     *client1.domain.com*:
>>>         address: 1.1.1.1
>>>         user: root
>>>         password: 123456
>>>         os: fedora-20
>>>     *client2.domain.com*:
>>>         address: 2.2.2.2
>>>         user: root
>>>         password: 123456
>>>         os: fedora-20
>>>     *client3.domain.com*:
>>>         address: 3.3.3.3
>>>         user: root
>>>         password: 123456
>>>         os: centos-6.5
>>> ...
>>> servers:
>>>     *server1.domain.com*:
>>>         address: 10.10.10.10
>>>         user: root
>>>         password: 123456
>>>     *server2.domain.com*:
>>>         address: 1.1.1.1
>>>         user: root
>>>         password: 123456
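The matching step behind this request/response pair is simple enough to sketch. The field names below are made up to mirror the example, not any existing API:

```python
def matches(node, spec):
    # A node satisfies a spec if every requested attribute is met;
    # "disk" is treated as a minimum in GB, the rest as exact/at-least.
    if node["type"] != spec["type"] or node["os"] != spec["os"]:
        return False
    if node["cpu"] < spec["cpu"] or node["memory_gb"] < spec["memory_gb"]:
        return False
    return node["disk_gb"] >= spec["min_disk_gb"]


def allocate(inventory, spec):
    # Pick the first `count` free nodes that satisfy the spec and mark
    # them in use; releasing a node would simply flip the flag back.
    picked = []
    for node in inventory:
        if len(picked) == spec["count"]:
            break
        if not node.get("in_use") and matches(node, spec):
            node["in_use"] = True
            picked.append(node)
    return picked
```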
>>>
>>>
>>>
>>>
>>> *I can think of two solutions:*
>>> 1. Develop my own layer and use OpenStack just as the provisioning
>>> layer via its API, with Foreman for bare metal, the Puppet interface, lab
>>> services configuration (DNS, DHCP, PXE, etc.), and a facts-based search
>>> engine across all resources (virtual and physical).
>>> 2. Develop an OpenStack component that integrates with Keystone, Nova,
>>> and Horizon, and implement my own business layer.
>>>
>>>
>>> *My questions:*
>>> 1. Is there any way of actually implementing my second solution? Is
>>> there any documentation on writing a new OpenStack component?
>>> 2. I think my scenario is common, and this solution could help
>>> many other companies. Is there an OpenStack project that solves it?
>>> 3. How can I offer it to OpenStack as a new component?
>>>
>>> I would be thankful for any help and comments.
>>>
>>> Thanks
>>>
>>>
>>>
>>>
>>>
>>> --
>>> *Avi Tal*
>>> _______________________________________________
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to     : openstack at lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>>
>>
>>
>> --
>> *Avi Tal*
>>
>>
>>
>
>
> --
> Matt Jarvis
> Head of Cloud Computing
> DataCentred
> Office: (+44)0161 8703985
> Mobile: (+44)07983 725372
> Email: matt.jarvis at datacentred.co.uk
> Website: http://www.datacentred.co.uk
>
> DataCentred Limited registered in England and Wales no. 05611763




-- 
*Avi Tal*

