[openstack-dev] Technical vision for OpenStack to be ubiquitous cloud service

joehuang joehuang at huawei.com
Tue Apr 28 09:21:17 UTC 2015


Hello, all,

This is Chaoyi Huang (Joe Huang). I am not a TC member, nor a TC candidate, but I have a technical vision for OpenStack to become a ubiquitous cloud service. I would like to apply for one cross-project session at the Vancouver design summit to give a lightning talk on this vision; comments from TC members, TC candidates, and anyone else are welcome.

A tenant doesn't care how large the cloud is, or whether it is built through federation, multi-region, or hybrid cloud; the tenant just needs the cloud to provide resources on request. Only the cloud admin cares about scalability.
A tenant doesn't care about the other tenants who are also using the cloud. Only the cloud admin does.
A tenant wants the cloud to be there, providing resources to support the business presence, whenever the business enters a new market.
And if an application needs to be deployed in the cloud, the tenant doesn't want to put all eggs in one basket; he hopes to deploy the app in multiple places for fail-safety.
A tenant only cares about his own bill, his own monitoring, and whether his own resources are healthy, wherever those resources are distributed.
A tenant also cares about managing his IP address space so that it does not overlap in the cloud, and whether the cloud can provide networking support for the inter-connection needs of his various applications, wherever those applications are distributed.
...

Now, how will OpenStack enable this? How can OpenStack become a ubiquitous cloud service that meets the demands of countless tenants?

In the real world, there are already technology systems providing ubiquitous service: the telecom system serves almost 7 billion users, and the internet likewise serves billions. It's impossible to build a ubiquitous service without addressing and routing. OpenStack can manage tenant resources in one region. Just as bionics helps in building aircraft, what will happen if we borrow ideas from the telecom system and the internet for the cloud world?

Let's start a journey to combine these ideas of addressing, routing, and tenant resource management to provide a ubiquitous OpenStack cloud service. (Here we focus on IaaS services: Nova, Cinder, Neutron, Ceilometer, Glance, Keystone, Horizon.)

1) When a new tenant account is opened, the cloud provider dynamically creates/allocates one new OpenStack instance to serve this tenant exclusively (it could be shared with other tenants, but for now let's assume it is exclusively owned by the tenant). Let's call this OpenStack instance the tenant OpenStack; it includes the Nova, Cinder, Neutron, Ceilometer, Glance, and Horizon services, and one virtual region is assigned to the tenant. The cloud provider can create/allocate the tenant OpenStack geographically based on the tenant's requirements.

2) When a tenant user accesses the cloud, he will be redirected to his tenant OpenStack. This is the first level of addressing and routing. Because each tenant can have his own OpenStack service, the load can be fully distributed, with no central point at all. The tenant OpenStack itself then needs to introduce resource-level addressing and routing, acting in a cloud-broker role.

3) The tenant needs no knowledge of whether the cloud is built from hybrid / federated / multi-region / multiple OpenStack instances; he simply regards the cloud as having unlimited resources. That means there is a huge pool of OpenStack instances behind the cloud providing resources such as virtual machines, volumes, and networks, and the tenant OpenStack should be able to act as a cloud broker and route each resource request to the desired OpenStack instance.

4) At the beginning, the tenant OpenStack will be configured to route resource requests to specified OpenStack instances, for example OpenStack instance 1 and OpenStack instance 2. Another tenant may be configured to use OpenStack instance 2 and OpenStack instance x, or instance x and instance y. The OpenStack instances layer can be an ocean of cloud resource pools, no matter whether these OpenStack instances run in federated, hybrid, or multi-region mode. But each tenant will only be allowed to request resources from a few OpenStack instances, i.e. the relationship between the tenant OpenStack layer and the OpenStack instances layer is M:N.

5) When a tenant wants to create a virtual machine or volume, we need an addressing and routing mechanism to find the proper OpenStack instance. Here the availability zone (AZ for short) is used for OpenStack instance addressing and routing. For example, OpenStack instance 1 is configured as AZ1 and OpenStack instance 2 as AZ2. Through this mechanism there is no need to change the OpenStack API: if a VM or volume is created in AZ1, the request is routed to OpenStack instance 1; if it is created in AZ2, the request is routed to OpenStack instance 2.
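The AZ-based routing above can be sketched in a few lines. This is a minimal illustration only; the endpoint URLs and the AZ-to-instance map are assumptions for the example, not part of any real deployment or API.

```python
# Illustrative AZ -> backend OpenStack instance map (the M:N mapping of
# step 4 would restrict which AZs each tenant is configured with).
AZ_TO_BACKEND = {
    "AZ1": "http://openstack-instance-1:8774/v2.1",  # Nova API of instance 1
    "AZ2": "http://openstack-instance-2:8774/v2.1",  # Nova API of instance 2
}

def route_create_request(availability_zone: str, request: dict):
    """Pick the backend OpenStack instance serving the given AZ.

    Raises ValueError if the tenant is not configured for that AZ.
    """
    try:
        endpoint = AZ_TO_BACKEND[availability_zone]
    except KeyError:
        raise ValueError(f"tenant has no access to AZ {availability_zone!r}")
    # The unchanged API request is simply forwarded to the chosen backend.
    return endpoint, request

endpoint, fwd = route_create_request("AZ1", {"name": "vm-1", "flavor": "m1.small"})
```

The point is that the tenant-facing API stays standard OpenStack; only the dispatch target changes.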

6) Images also need a mechanism for addressing and routing; the image location is used for this. An image location can be an HTTP address, so the image link in OpenStack instance 1 or 2 is simply registered as the image location in the tenant's Glance. When the image is used through the tenant Glance, the request can be properly routed to the image stored in OpenStack instance 1 or 2. Validation of the image location against the AZ addressing and routing can be done in the tenant OpenStack.
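A sketch of this image-location idea, with an in-memory catalog standing in for the tenant Glance. All names, IDs, and URLs are made up for illustration; real Glance stores locations in its image records.

```python
# In-memory stand-in for the tenant Glance image catalog.
tenant_images = {}

def register_image(image_id: str, backend_location: str, az: str) -> None:
    """Register a backend image URL as the location of a tenant image."""
    tenant_images[image_id] = {"location": backend_location, "az": az}

def resolve_image(image_id: str, target_az: str) -> str:
    """Return the backend image URL, validating it matches the target AZ."""
    record = tenant_images[image_id]
    if record["az"] != target_az:
        raise ValueError("image is not registered in the target AZ")
    return record["location"]

# The tenant image is just a pointer to the image held by instance 1 (AZ1).
register_image("cirros", "http://openstack-instance-1:9292/v2/images/abc123", "AZ1")
location = resolve_image("cirros", "AZ1")
```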

7) Now the tenant can create and find all of his virtual machines and volumes through his own tenant OpenStack via addressing and routing. Very importantly, the tenant appears to have a virtual region served by one OpenStack and its API. The tenant's quota control can be configured in the tenant OpenStack by the cloud provider. For this purpose, and for a better VM/volume query experience, we can cache the VM/volume information in the tenant OpenStack's database; it would be very small (multiple tenants can share one tenant OpenStack to reduce resource waste, but it should be kept as thin as possible for easy backup and disaster recovery). This further makes the tenant OpenStack look like a real OpenStack to the tenant, but with no virtual machines or volumes running inside: the real virtual machines and volumes reside in OpenStack instance 1 or instance 2.

8) The virtual machines should be inter-connected, and the Neutron in the tenant OpenStack can make that happen. The tenant can create networks/subnets in the tenant Neutron and attach virtual machines to them. When a virtual machine is created, it will be booted in some AZ, and the network/subnet will accordingly be created in that AZ (for example, if the VM is in AZ1, the network/subnet is created in OpenStack instance 1); that is, the virtual network is materialized wherever the VM resides. This is the network addressing and routing schema.
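The "network follows the VM's AZ" rule can be sketched as below. The dictionaries and function names are hypothetical stand-ins for the tenant Neutron and the per-AZ backend Neutrons.

```python
# Stand-in for the networks materialized in each backend instance.
backend_networks = {"AZ1": set(), "AZ2": set()}

def ensure_network(net_name: str, az: str) -> None:
    """Materialize the tenant network in the backend serving this AZ,
    if it is not already present there."""
    backend_networks[az].add(net_name)

def boot_vm(vm_name: str, net_name: str, az: str) -> dict:
    ensure_network(net_name, az)  # the network follows the VM's AZ
    return {"vm": vm_name, "az": az, "network": net_name}

vm = boot_vm("vm-1", "tenant-net", "AZ1")
# "tenant-net" now exists only in AZ1's backend, where the VM resides.
```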

9) Besides networking inside one AZ, the tenant's virtual machines in different AZs should also be able to be connected, at L3 or L2. For tenant-level L3 networking across OpenStack instances, a router can be created in each OpenStack instance where the tenant has virtual machines, and these distributed routers of the tenant can be inter-connected through a provider L2 network spanning the OpenStack instances. The provider L2 network for inter-connection can be created on demand by the cloud provider's Neutron for the tenant. The tenant only needs to create a virtual router in the tenant Neutron; the router instance creation and inter-connection can be handled by the tenant Neutron and its mechanism driver and plug-in. A cross-OpenStack-instances L2 network for the tenant can also be created; such an L2 network can be used for heartbeat or data synchronization by high-availability applications running on multiple OpenStack instances, for example with the active virtual machine running in AZ1 and the standby in AZ2.

10) The tenant also has a single point of contact for his metering data in the tenant Ceilometer, and can build cloud-watch or auto-scaling features on top of it. The tenant Ceilometer can collect data from the different Ceilometer instances in the different AZs. The load is small, since the tenant Ceilometer serves only that one tenant.
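The aggregation role of the tenant Ceilometer can be sketched as a simple merge of per-AZ samples. The meter names and values here are invented for illustration, not real Ceilometer output.

```python
from collections import defaultdict

def aggregate_samples(per_az_samples: dict) -> dict:
    """Merge per-AZ metering samples into one tenant-wide view."""
    totals = defaultdict(float)
    for samples in per_az_samples.values():
        for meter, value in samples:
            totals[meter] += value
    return dict(totals)

# Samples pulled from the Ceilometer instances in AZ1 and AZ2.
usage = aggregate_samples({
    "AZ1": [("cpu_hours", 12.0), ("volume_gb", 40.0)],
    "AZ2": [("cpu_hours", 3.0)],
})
# usage: {"cpu_hours": 15.0, "volume_gb": 40.0}
```

On top of such a tenant-wide view, per-tenant billing or auto-scaling decisions can be made without querying each AZ directly.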

11) The tenant OpenStack only stores data for that tenant, so its size will be very small. Therefore it is easy to back up, to recover from disaster, or even to run the tenant OpenStack service as a geographically distributed cluster.

12) As the tenant's business expands, the tenant may ask for resource creation in a new area; just configure the tenant OpenStack to route requests to the corresponding new OpenStack instances, for example growing from AZ1, AZ2 to AZ1, AZ2, AZ3...

13) If the cloud provider does not have a large enough pool of OpenStack instances, federated OpenStack instances can become part of the pool; the tenant won't be aware of that, and the experience stays the same.

14) In fact, the tenant OpenStack's addressing and routing mechanism just treats OpenStack instances as its backends. We have already developed many drivers/agents for Nova/Neutron/Cinder/... to integrate different backends, so there is no harm in adding one more driver/agent; the only special thing is that the backend is another instance of the same kind of service.
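The "another OpenStack as backend" driver idea might look like the sketch below. The class and method names are hypothetical and do not follow the real Nova virt driver interface; a real driver would make REST calls to the backend's Nova API where the comment indicates.

```python
class CascadingComputeDriver:
    """Compute driver whose backend is another OpenStack's Nova API."""

    def __init__(self, backend_endpoint: str):
        self.backend_endpoint = backend_endpoint
        self.spawned = []  # stand-in for real API calls, for illustration

    def spawn(self, instance: dict) -> dict:
        # A real driver would POST /servers to self.backend_endpoint here;
        # we only record the forwarded request to keep the sketch runnable.
        forwarded = {"endpoint": self.backend_endpoint, **instance}
        self.spawned.append(forwarded)
        return forwarded

driver = CascadingComputeDriver("http://openstack-instance-2:8774/v2.1")
result = driver.spawn({"name": "vm-2", "image": "cirros"})
```

The same shape applies to Cinder and Neutron: the driver surface stays the one the services already know, only the backend happens to speak the OpenStack API itself.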

15) We can also develop drivers/agents for AWS and Azure in OpenStack, so that the tenant OpenStack can address and route to resources distributed in public clouds as well.

With a fully distributed tenant OpenStack layer and an ocean of OpenStack instances as the lower layer, combined through addressing and routing, we can use OpenStack to build a world of ubiquitous cloud service.

Is this vision feasible? The answer is yes: you can find a PoC in reference [1]. The PoC is just a proof of concept indeed; if the vision is the right direction, the community can re-design and re-write the source code to make the vision real.

[1] https://wiki.openstack.org/wiki/OpenStack_cascading_solution

Best Regards
Chaoyi Huang ( Joe Huang )


