[Openstack] Integration with OpenStack
Alex Glikson
GLIKSON at il.ibm.com
Sat May 31 09:40:41 UTC 2014
Hi Avi,
This is a very interesting use-case. We have been experimenting internally
with similar ideas (dynamic management of virtualized and bare-metal
resources).
In a nutshell, you can use Heat templates to provision the different
environments. For bare-metal, you can configure Nova to surface bare-metal
flavors (with nova-baremetal driver underneath), mapped to a dedicated
host aggregate comprising bare-metal machines. You can construct
bare-metal images with diskimage-builder from the TripleO project.
If you don't have non-trivial networking requirements (e.g., can work with
a single flat network), things might work pretty much out of the box.
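As a rough sketch of that out-of-the-box case (all flavor, image, and network names below are illustrative assumptions, not real values — the flavor is presumed mapped to the bare-metal host aggregate and the image built with diskimage-builder), a minimal Heat template for one bare-metal node could look like this, expressed as a Python dict:

```python
# Minimal HOT template for a single bare-metal node, as a Python dict.
# "bm.small", "centos-6.5-baremetal", and "flat-net" are assumed names.
bare_metal_template = {
    "heat_template_version": "2013-05-23",
    "resources": {
        "bm_node": {
            "type": "OS::Nova::Server",
            "properties": {
                "flavor": "bm.small",                  # assumed bare-metal flavor
                "image": "centos-6.5-baremetal",       # assumed DIB-built image
                "networks": [{"network": "flat-net"}], # single flat network
            },
        },
    },
    "outputs": {
        "bm_ip": {"value": {"get_attr": ["bm_node", "first_address"]}},
    },
}
```

The dict could then be submitted through python-heatclient, e.g. heat.stacks.create(stack_name='bm-env', template=bare_metal_template).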
Things would get a bit more complicated if you want to dynamically
re-purpose physical nodes between virtualized and bare-metal workloads.
Depending on the nature of your workloads (e.g., your ability to predict
the desired size of each pool), you may consider using something like Heat
auto-scaling to drive the outer control loop (but it might require some
code changes to work properly in this case). Alternatively, this logic can
be external, invoking Heat for provisioning (you can also use Heat +
nova-baremetal + TripleO tools to provision compute nodes themselves).
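To illustrate, the sizing decision of that outer control loop can be sketched independently of OpenStack. The function below is hypothetical glue, not any existing API: it only decides how many idle nodes to repurpose in each direction, while the actual repurposing would be a Heat/TripleO redeploy.

```python
def plan_repurpose(bm_demand, bm_idle, hv_demand, hv_idle):
    """Decide how many idle physical nodes to move between the
    bare-metal pool and the hypervisor pool to cover pending demand.
    Only surplus idle nodes of the other pool are repurposed; returns
    a (to_bare_metal, to_hypervisor) pair of node counts."""
    bm_short = max(0, bm_demand - bm_idle)   # unmet bare-metal requests
    hv_short = max(0, hv_demand - hv_idle)   # unmet hypervisor requests
    to_bm = min(bm_short, max(0, hv_idle - hv_demand))  # take surplus hypervisors
    to_hv = min(hv_short, max(0, bm_idle - bm_demand))  # take surplus bare-metal
    return to_bm, to_hv
```

For example, with 3 pending bare-metal requests, 1 idle bare-metal node, and 4 surplus hypervisor nodes, it would repurpose 2 hypervisors; when both pools are short, it moves nothing rather than thrash.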
There are many nuances to make it work, but with certain simplifying
assumptions it seems feasible to come up with a 'native' OpenStack
solution, with a minimal amount of custom code.
Regards,
Alex
P.S. We are going to present some related work at the OpenStack Israel
event on Monday - we can follow up face-to-face if you plan to be there
too.
From: Avi Tal <avi3tal at gmail.com>
To: openstack at lists.openstack.org,
Date: 31/05/2014 11:34 AM
Subject: [Openstack] Integration with OpenStack
Hi all,
I am designing a "Dynamic Resource Allocation" system for my company's lab
resources. The focal point of this solution should be OpenStack.
Background:
The testing and dev environments are built out of multiple nodes: servers
and clients. Some could be virtual, but bare-metal must also be supported.
The goal is to manage the resource pool (both virtual and physical)
dynamically: let an automated test request a specific environment by
posting the environment details, and release it back to the pool at the
end of the test.
Example:
Request:
  client:
    count: 2
    type: virtual
    os: fedora-20
    memory: 2GB
    cpu: 4
    disk: >200G
    packages: ['puppet', 'fio', 'python-2.7']
  client:
    count: 4
    type: physical
    os: centos-6.5
    memory: 2GB
    cpu: 4
    disk: >100G flash
    packages: ['puppet', 'fio', 'python-2.7']
  server:
    count: 2
    type: physical
    os: centos-6.5
    build: 'b10'
Response:
  clients:
    client1.domain.com:
      address: 1.1.1.1
      user: root
      password: 123456
      os: fedora-20
    client2.domain.com:
      address: 2.2.2.2
      user: root
      password: 123456
      os: fedora-20
    client3.domain.com:
      address: 3.3.3.3
      user: root
      password: 123456
      os: centos-6.5
    ...
  servers:
    server1.domain.com:
      address: 10.10.10.10
      user: root
      password: 123456
    server2.domain.com:
      address: 1.1.1.1
      user: root
      password: 123456
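The request/release workflow above can be sketched as a minimal in-memory pool allocator. This is purely illustrative and not part of any OpenStack API; node fields and spec keys are assumed names chosen to mirror the example.

```python
def allocate(pool, spec):
    """Greedy allocation: claim the first `count` free nodes matching the
    requested type, OS, and minimum disk. Returns the claimed nodes, or
    None (claiming nothing) if the request cannot be fully satisfied."""
    picked = [n for n in pool
              if n["free"]
              and n["type"] == spec["type"]
              and n["os"] == spec["os"]
              and n["disk_gb"] >= spec["min_disk_gb"]][:spec["count"]]
    if len(picked) < spec["count"]:
        return None  # do not hold a partial allocation
    for node in picked:
        node["free"] = False
    return picked

def release(nodes):
    """Return nodes to the pool at the end of a test."""
    for node in nodes:
        node["free"] = True
```

A test harness would call allocate() with the posted environment details, run against the returned nodes, and call release() in its teardown.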
I could think of two solutions:
1. Develop my own layer and use OpenStack only as the provisioning layer
via its API, with Foreman for bare-metal, the Puppet interface, lab
services configuration (DNS, DHCP, PXE, etc.), and a search engine over
facts across all resources (virtual and physical).
2. Develop an OpenStack component that integrates with Keystone, Nova,
and Horizon, and implements my own business layer.
My questions:
1. Is there any way to actually implement my second solution? Is there any
documentation for writing a new OpenStack component?
2. I think my scenario is common, and this solution could help many other
companies. Is there any OpenStack project that solves it?
3. How can I offer it to OpenStack as a new component?
I would be thankful for any help and comments.
Thanks
--
Avi Tal