[Openstack] [autoscaling][icehouse][OS::Heat::AutoScalingGroup]

Pavlo Shchelokovskyy pshchelokovskyy at mirantis.com
Thu Mar 5 16:27:50 UTC 2015


Hi,

There are two possibilities. First, check the polling interval in your
/etc/ceilometer/pipeline.yaml and decrease it to collect samples more
frequently (the default is 600 seconds, so on average you'd have
to wait about 15 min for autoscaling to kick in; 60 is good for dev
purposes, but not in production :) ).
The other possibility is that you are filtering on the wrong metadata. Search
the ceilometer samples by the resource_id of one of the nova instances in your
ASG and check that they indeed carry metadata of the form
metadata.user_metadata.stack = 31f62d11-401e-435b-a2a7-1e5318ce8159
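For reference, a shortened interval might look roughly like this (the exact
layout of pipeline.yaml varies between Ceilometer releases, and the pipeline
name here is illustrative):

```yaml
# /etc/ceilometer/pipeline.yaml (icehouse-era layout; names are illustrative)
-
    name: cpu_pipeline
    interval: 60        # collect samples every 60 s instead of the 600 s default
    meters:
        - "cpu"
    publishers:
        - notifier://
```

After editing, restart the ceilometer compute agent (e.g.
`service ceilometer-agent-compute restart`) so the new interval takes effect.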
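For example, assuming <instance-uuid> is one of the ASG members (icehouse-era
ceilometer CLI syntax):

```shell
# list cpu_util samples for one instance and inspect its metadata
ceilometer sample-list -m cpu_util -q resource_id=<instance-uuid>

# then verify samples match the exact query the alarm uses
ceilometer sample-list -m cpu_util \
    -q metadata.user_metadata.stack=31f62d11-401e-435b-a2a7-1e5318ce8159
```

If the second command returns no samples while the first does, the alarm is
filtering on the wrong metadata.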

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Thu, Mar 5, 2015 at 4:57 PM, Chinasubbareddy M <
chinasubbareddy_m at persistent.com> wrote:

> Hello all,
>
> I can see the alarm in alarm-history. However, no actions are triggered.
> Could you please help me out here?
>
> Here is the alarm-show output:
>
> root at build-server:~# ceilometer alarm-show --alarm_id
> ec8d3a5f-f890-465e-b796-dba2cf2c12fe
>
> +---------------------------+--------------------------------------------------------------------------+
> | Property                  | Value                                                                    |
> +---------------------------+--------------------------------------------------------------------------+
> | alarm_actions             | [u'http://10.44.191.200:8000/v1/signal/arn%3Aopenstack%3Aheat%3A%3Aec3e93810a7c4be2881bd7ede428526a%3Astacks%2FasworkingFinal%2F31f62d11-401e-435b-a2a7-1e5318ce8159%2Fresources%2Fweb_server_scaleup_policy?Timestamp=2015-03-04T13%3A13%3A31Z&SignatureMethod=HmacSHA256&AWSAccessKeyId=82167ae13f3240e5aa6f1d6a5c4f39d3&SignatureVersion=2&Signature=fOH%2Fny5BpzbLqwWce1qHyqAjBn9YXR0F%2FPlzeeJCdYc%3D'] |
> | alarm_id                  | ec8d3a5f-f890-465e-b796-dba2cf2c12fe                                     |
> | comparison_operator       | gt                                                                       |
> | description               | Scale-up if the average CPU > 50% for 1 minute                           |
> | enabled                   | True                                                                     |
> | evaluation_periods        | 1                                                                        |
> | exclude_outliers          | False                                                                    |
> | insufficient_data_actions | []                                                                       |
> | meter_name                | cpu_util                                                                 |
> | name                      | asworkingFinal-cpu_alarm_high-swtpxt4coqme                               |
> | ok_actions                | []                                                                       |
> | period                    | 60                                                                       |
> | project_id                | ec3e93810a7c4be2881bd7ede428526a                                         |
> | query                     | metadata.user_metadata.stack == 31f62d11-401e-435b-a2a7-1e5318ce8159     |
> | repeat_actions            | True                                                                     |
> | state                     | alarm                                                                    |
> | statistic                 | avg                                                                      |
> | threshold                 | 50.0                                                                     |
> | type                      | threshold                                                                |
> | user_id                   | cac9bb03d57f41359df9b12c4b6d2318                                         |
> +---------------------------+--------------------------------------------------------------------------+
>
> Here is the output of alarm history:
>
> | state transition | 2015-03-05T13:05:57.360308 | state: insufficient data |
> | state transition | 2015-03-05T13:13:57.514774 | state: alarm             |
> | state transition | 2015-03-05T13:14:57.438538 | state: insufficient data |
> | state transition | 2015-03-05T13:23:57.578754 | state: alarm             |
> | state transition | 2015-03-05T13:24:57.497549 | state: insufficient data |
> | state transition | 2015-03-05T13:33:57.583486 | state: alarm             |
> | state transition | 2015-03-05T13:34:57.559243 | state: insufficient data |
> | state transition | 2015-03-05T13:43:57.693379 | state: alarm             |
> | state transition | 2015-03-05T13:44:57.605715 | state: insufficient data |
> | state transition | 2015-03-05T13:53:57.802353 | state: alarm             |
> | state transition | 2015-03-05T13:55:57.662511 | state: insufficient data |
> | state transition | 2015-03-05T14:03:57.773131 | state: alarm             |
> | state transition | 2015-03-05T14:05:57.735085 | state: insufficient data |
> | state transition | 2015-03-05T14:13:57.884780 | state: alarm             |
> | state transition | 2015-03-05T14:15:57.808676 | state: insufficient data |
> | state transition | 2015-03-05T14:23:57.876373 | state: alarm             |
> | state transition | 2015-03-05T14:25:57.856860 | state: insufficient data |
> | state transition | 2015-03-05T14:33:57.999285 | state: alarm             |
> | state transition | 2015-03-05T14:35:57.928619 | state: insufficient data |
> | state transition | 2015-03-05T14:43:58.000641 | state: alarm             |
> | state transition | 2015-03-05T14:45:57.994070 | state: insufficient data |
> | rule change      | 2015-03-05T14:46:28.934119 | repeat_actions: True     |
> | state transition | 2015-03-05T14:53:58.274025 | state: alarm             |
> | state transition | 2015-03-05T14:55:58.039142 | state: insufficient data |
>
> +------------------+----------------------------+--------------------------+
>
> -----Original Message-----
> From: Chinasubbareddy M
> Sent: Thursday, March 05, 2015 5:38 PM
> To: 'Deepthi Dharwar'; openstack at lists.openstack.org
> Subject: RE: [Openstack] [openstack][autoscaling][icehouse][OS::Heat::
> AutoScalingGroup]
>
> Thank you Deepthi, it's working now.
>
> -----Original Message-----
> From: Deepthi Dharwar [mailto:deepthi at linux.vnet.ibm.com]
> Sent: Wednesday, March 04, 2015 4:18 PM
> To: openstack at lists.openstack.org
> Subject: Re: [Openstack] [openstack][autoscaling][icehouse][OS::Heat::
> AutoScalingGroup]
>
> On 03/04/2015 02:36 PM, Chinasubbareddy M wrote:
> > Hi,
> >
> >
> >
> > I am testing the OpenStack autoscaling function, but the stack is
> > failing with the error below:
> >
> > Multiple possible networks found, use a Network ID to be more
> > specific
> >
> > Could anybody tell me how to solve it?
> >
> > This is happening when using the OS::Heat::AutoScalingGroup
> > resource; is there any way to specify the network ID in this resource?
>
> In your load balancer template, i.e. lb_server.yaml, add the following.
>
> Under 'parameters', add the network ID:
>
> parameters:
>   private_net_id:
>     type: string
>     default: XXXX
>
>
> And under 'resources':
>
>   member:
>     type: OS::Neutron::PoolMember
>     properties:
>       pool_id: { get_param: pool_id }
>       address: { get_attr: [ server, first_address ] }
>       protocol_port: 80
>   server_port:
>     type: OS::Neutron::Port
>     properties:
>       network_id: { get_param: private_net_id }  # add the network ID here
>       fixed_ips:
>         - subnet_id: { get_param: subnet_id }
>
> This should help you specify the network in which you want to plug in your
> VMs.
>
> Regards,
> Deepthi
>
>
>
> >
> >
> > Regards,
> >
> > Subbareddy,
> >
> > Persistent systems ltd.
> >
> > DISCLAIMER ========== This e-mail may contain privileged and
> > confidential information which is the property of Persistent Systems
> > Ltd. It is intended only for the use of the individual or entity to
> > which it is addressed. If you are not the intended recipient, you are
> > not authorized to read, retain, copy, print, distribute or use this
> > message. If you have received this communication in error, please
> > notify the sender and delete all copies of this message. Persistent
> > Systems Ltd. does not accept any liability for virus infected mails.
> >
> >
> >
> > _______________________________________________
> > Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > Post to     : openstack at lists.openstack.org
> > Unsubscribe :
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >
>
>
