[Openstack] Integration with OpenStack

Alex Glikson GLIKSON at il.ibm.com
Sat May 31 09:40:41 UTC 2014

Hi Avi,

This is a very interesting use-case. We have been experimenting internally 
with similar ideas (dynamic management of virtualized and bare-metal 
resources).
In a nutshell, you can use Heat templates to provision the different 
environments. For bare-metal, you can configure Nova to surface bare-metal 
flavors (with nova-baremetal driver underneath), mapped to a dedicated 
host aggregate comprising bare-metal machines. You can construct 
bare-metal images with diskimage-builder from the TripleO project.
If you don't have non-trivial networking requirements (e.g., can work with 
a single flat network), things might work pretty much out of the box.
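
For illustration, a minimal Heat (HOT) template along these lines might 
look as follows. All names here are hypothetical: 'bm.large' stands in for 
a flavor mapped to the bare-metal host aggregate, and the image would be 
one you build with diskimage-builder.

```yaml
heat_template_version: 2013-05-23

description: Sketch -- provision one node on a bare-metal flavor

parameters:
  key_name:
    type: string

resources:
  baremetal_node:
    type: OS::Nova::Server
    properties:
      # Hypothetical flavor surfaced by the nova-baremetal driver and
      # pinned to the bare-metal host aggregate via its extra specs.
      flavor: bm.large
      # Hypothetical image built with diskimage-builder.
      image: centos-6.5-baremetal
      key_name: { get_param: key_name }
      networks:
        - network: flat-net
```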
Things would get a bit more complicated if you want to dynamically 
re-purpose physical nodes between virtualized and bare-metal workloads. 
Depending on the nature of your workloads (e.g., your ability to predict 
the desired size of each pool), you may consider using something like Heat 
auto-scaling to drive the outer control loop (but it might require some 
code changes to work properly in this case). Alternatively, this logic can 
be external, invoking Heat for provisioning (you can also use Heat + 
nova-baremetal + TripleO tools to provision compute nodes themselves).
There are many nuances to make it work, but with certain simplifying 
assumptions it seems feasible to come up with a 'native' OpenStack 
solution, with minimal amount of custom code.
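
As a toy sketch of what such an outer control loop might decide (the 
function, its thresholds, and the heuristic are all assumptions, not part 
of any OpenStack project):

```python
def plan_repurposing(vm_demand, bm_demand, vm_nodes, bm_nodes, vms_per_node=4):
    """Decide how many physical nodes to move between pools.

    Positive result: repurpose that many hypervisors to bare-metal;
    negative: reclaim that many bare-metal nodes for the VM pool.
    Toy heuristic only -- real logic would also weigh reprovisioning cost.
    """
    # Ceiling division: hypervisors needed to host the requested VMs.
    vm_nodes_needed = -(-vm_demand // vms_per_node)
    spare_vm_nodes = vm_nodes - vm_nodes_needed
    missing_bm_nodes = bm_demand - bm_nodes
    if missing_bm_nodes > 0:
        # Repurpose spare hypervisors, but never starve the VM pool.
        return min(missing_bm_nodes, max(spare_vm_nodes, 0))
    if spare_vm_nodes < 0:
        # VM pool is short; reclaim bare-metal nodes that are not needed.
        return -min(-spare_vm_nodes, max(bm_nodes - bm_demand, 0))
    return 0
```

The actual provisioning step driven by this decision could then call Heat 
(or nova-baremetal directly) to rebuild the repurposed node in its new role.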


P.S. We are going to present some related work at the OpenStack Israel 
event on Monday - we can follow up face-to-face if you plan to be there.

From:   Avi Tal <avi3tal at gmail.com>
To:     openstack at lists.openstack.org, 
Date:   31/05/2014 11:34 AM
Subject:        [Openstack] Integration with OpenStack

Hi all,
I am designing a "Dynamic Resource Allocation" for my company lab 
resources. The focal point of this solution should be OpenStack.

The testing and dev environments are built out of multiple nodes (servers 
and clients). Some could be virtual, but bare-metal must also be supported.
The goal is to manage the resource pool (both virtual and physical) 
dynamically: let an automated test request a specific environment by 
posting the environment details, and release it back to the pool at the 
end of the test. For example, a request might look like:


    - count: 2
      type: virtual
      os: fedora-20
      memory: 2GB
      cpu: 4
      disk: >200G
      packages: ['puppet', 'fio', 'python-2.7']

    - count: 4
      type: physical
      os: centos-6.5
      memory: 2GB
      cpu: 4
      disk: >100G flash
      packages: ['puppet', 'fio', 'python-2.7']

    - count: 2
      type: physical
      os: centos-6.5
      build: 'b10'

        - user: root
          password: 123456
          os: fedora-20

        - user: root
          password: 123456
          os: fedora-20

        - user: root
          password: 123456
          os: centos-6.5

        - user: root
          password: 123456

        - user: root
          password: 123456
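
Whichever layer ends up owning this logic, the request/release cycle 
described above can be sketched as follows. Field names follow the example 
request; the matching rules (numeric fields as minimums, everything else 
exact) are my assumptions.

```python
class ResourcePool:
    """Toy sketch of a dynamic resource pool.

    Each node is a plain dict, e.g.
    {'type': 'physical', 'os': 'centos-6.5', 'memory': 2, 'cpu': 4, 'disk': 120}.
    """

    def __init__(self, nodes):
        self.free = list(nodes)
        self.allocated = []

    def request(self, count, **spec):
        """Claim `count` free nodes satisfying `spec`.

        Numeric specs ('memory', 'cpu', 'disk') are treated as minimums;
        other fields must match exactly.
        """
        matches = [n for n in self.free if self._matches(n, spec)]
        if len(matches) < count:
            raise LookupError('not enough free nodes for spec %r' % spec)
        claimed = matches[:count]
        for node in claimed:
            self.free.remove(node)
            self.allocated.append(node)
        return claimed

    def release(self, nodes):
        """Return nodes to the free pool at the end of a test."""
        for node in nodes:
            self.allocated.remove(node)
            self.free.append(node)

    @staticmethod
    def _matches(node, spec):
        for key, want in spec.items():
            have = node.get(key)
            if key in ('memory', 'cpu', 'disk'):
                if have is None or have < want:
                    return False
            elif have != want:
                return False
        return True
```

In a real system the pool contents would come from OpenStack (Nova for 
VMs, nova-baremetal or Foreman facts for physical nodes) rather than a 
hand-built list.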

I could think of two solutions:
1. Develop my own layer and use OpenStack only as the provisioning layer 
via its API, with Foreman for bare-metal, the Puppet interface, lab 
services configuration (DNS, DHCP, PXE, etc.), and a search engine over 
facts across all resources (virtual and physical).
2. Develop an OpenStack component that integrates with Keystone, Nova, and 
Horizon, and implement my own business layer on top.

My questions:
1. Is there any way to actually implement my second solution? Is there any 
documentation for writing a new OpenStack component?
2. I think my scenario is common and this solution could help many other 
companies. Is there any OpenStack project that solves it?
3. How can I offer it to OpenStack as a new component?

I would be thankful for any help and comments.


Avi Tal

