[openstack-dev] [Senlin]Support more complicated scaling scenario
jun.xu.sz at 139.com
Mon Nov 23 01:18:35 UTC 2015
Thanks, Yanyan!
Xu Jun is a contributor from CMCC. He asked a very interesting question about cluster scaling support in Senlin. To make the discussion more thorough, I am posting the question and my answer here.
The question from Jun is as follows:
For an action, Senlin will check all of the attached policies. For example, if a cluster has two scaling-in policies attached,
both scaling-in policies will be checked when a scaling action is performed on the cluster. This is not the same as OS::Heat::ScalingPolicy in Heat, is it?
How should I use Senlin for the following cases?
1. 15% < cpu_util < 30%, scale in 1 instance
2. cpu_util < 15%, scale in 2 instances
This is a very interesting question, and you're right about the difference between Senlin's ScalingPolicy and OS::Heat::ScalingPolicy. In Heat, OS::Heat::ScalingPolicy is actually not just a policy. It is a combination of a webhook and a rule about how the ASG responds to the webhook being triggered (resource signal). So you can define two different OS::Heat::ScalingPolicy instances to handle the two cases you described respectively.
But in Senlin, ScalingPolicy is a REAL policy; it only describes how a Senlin cluster reacts to an action triggered by a Senlin webhook, which is defined separately. The problem is that when a cluster action, e.g. CLUSTER_SCALE_IN, is triggered, all policies attached to the cluster will be checked in sequence based on their priority definitions. So if you create two Senlin ScalingPolicy objects and attach them to the same cluster, only one of them will actually take effect.
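Below is a minimal, simplified sketch of that point, assuming nothing about Senlin's real code (the ScalingPolicy class, the pre_op_check function and the "lower priority value is checked first" convention are all invented for illustration): every policy attached to the cluster is consulted for the same action in priority order, so only one adjustment value ends up taking effect.

from dataclasses import dataclass


@dataclass
class ScalingPolicy:
    name: str
    priority: int     # assumed convention: lower value is checked earlier
    adjustment: int   # number of nodes this policy would add/remove


def pre_op_check(policies, action_name):
    """Check every attached policy for one cluster action, in priority order."""
    count = None
    for policy in sorted(policies, key=lambda p: p.priority):
        # Each scaling policy proposes its own count for the action;
        # the last one checked is the one that effectively wins.
        count = policy.adjustment
        print(f"{action_name}: policy '{policy.name}' proposes count={count}")
    return count


attached = [
    ScalingPolicy("scale-in-by-1", priority=10, adjustment=1),
    ScalingPolicy("scale-in-by-2", priority=20, adjustment=2),
]
# Both policies are checked for the same CLUSTER_SCALE_IN action,
# so only one adjustment value actually takes effect.
print("final count:", pre_op_check(attached, "CLUSTER_SCALE_IN"))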
# 1. But in the policy_check function, all of the policies will be checked in priority order for a CLUSTER_SCALE_IN action if the cluster has multiple scaling policies attached.
Is this a bug? (Or, if not, what is the significance of priority?)
2. If a cluster has a scaling policy attached with event = CLUSTER_SCALE_IN, that policy will also be checked when a CLUSTER_SCALE_OUT action is triggered. Is this reasonable?
Currently, you can support your use case in Senlin in the following way (a rough sketch of the resulting flow follows the list):
1. Define two Senlin webhooks targeting the CLUSTER_SCALE_IN action of the cluster, and specify the 'param' as {'count': 1} for webhook1 and {'count': 2} for webhook2;
2. Define two ceilometer/aodh alarms, with the first one matching case 1 and the second one matching case 2. Then set webhook1's URL as alarm1's alarm-action and webhook2's URL as alarm2's alarm-action.
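To make that flow concrete, here is a minimal sketch of what each alarm-action amounts to, assuming two placeholder webhook URLs (the real ones are whatever the webhook creation step returns) and alarm names made up for illustration: Aodh simply POSTs to the Senlin webhook URL it was configured with, and the 'count' already baked into that webhook decides how many nodes are removed.

import requests

# Placeholder URLs; replace with the URLs returned when the two webhooks
# were created, each carrying its own 'count' parameter.
WEBHOOK_SCALE_IN_BY_1 = "http://senlin.example.com:8778/v1/webhooks/<id1>/trigger"
WEBHOOK_SCALE_IN_BY_2 = "http://senlin.example.com:8778/v1/webhooks/<id2>/trigger"


def on_alarm(alarm_name):
    """Mimic what each Aodh alarm-action does when the alarm fires."""
    if alarm_name == "cpu_util_15_to_30":       # case 1: 15% < cpu_util < 30%
        requests.post(WEBHOOK_SCALE_IN_BY_1)    # scale in by 1
    elif alarm_name == "cpu_util_below_15":     # case 2: cpu_util < 15%
        requests.post(WEBHOOK_SCALE_IN_BY_2)    # scale in by 2


on_alarm("cpu_util_below_15")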
# Your suggestion has a problem when I want a different cooldown for each ceilometer/aodh alarm. For the following cases, what should I do?
1. 15% < cpu_util < 30%, scale in 1 instance with a 300s cooldown time
2. cpu_util < 15%, scale in 2 instances with a 600s cooldown time
For a Senlin webhook, could we assign a policy which will be checked?
Then, each time alarm1 is triggered, the cluster will be scaled in with count 1, which means one node will be removed from the cluster. When alarm2 is triggered, the cluster will be scaled in with count 2, so two nodes will be removed from the cluster.
The question you asked is really interesting, and we did consider supporting this kind of requirement using a 'complex' ScalingPolicy which defines the trigger (alarm), the webhook and some rules for scaling. But after some discussion, we felt that we should let a higher-level service or the end user define this kind of 'POLICY', since it is more like a workflow definition than a description of the cluster scaling rule. So currently, we only provide atomic operations (e.g. webhooks and the 'simple' ScalingPolicy) in Senlin, while leaving the work of combining these operations to support a given use case to the end user or a higher-level service.
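To illustrate what that end-user/high-level-service "combination of atomic operations" could look like for the cooldown question above, here is a rough sketch of my own (not a Senlin feature; the webhook URLs and alarm names are placeholders): a small forwarder keeps a per-rule cooldown outside Senlin and only POSTs to the corresponding webhook when that rule's cooldown has expired.

import time

import requests

RULES = {
    # alarm name -> (webhook URL to trigger, cooldown in seconds)
    "cpu_util_15_to_30": ("http://senlin.example.com/v1/webhooks/<id1>/trigger", 300),
    "cpu_util_below_15": ("http://senlin.example.com/v1/webhooks/<id2>/trigger", 600),
}

_last_fired = {}  # alarm name -> time the webhook was last triggered


def handle_alarm(alarm_name):
    """Forward an alarm to its Senlin webhook, honouring a per-rule cooldown."""
    url, cooldown = RULES[alarm_name]
    now = time.monotonic()
    last = _last_fired.get(alarm_name)
    if last is not None and now - last < cooldown:
        return "skipped: rule is still cooling down"
    _last_fired[alarm_name] = now
    requests.post(url)
    return "triggered"


print(handle_alarm("cpu_util_below_15"))
print(handle_alarm("cpu_util_below_15"))  # second call within 600s -> skipped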
Thanks a lot for raising this interesting question, and I agree that we should discuss it further to decide whether we need to adjust our policy design to support this kind of scenario more smoothly.
--
Best regards,
Yanyan