[openstack-dev] [ceilometer] Compute agent local VM inspector - potential enhancement

boden boden at linux.vnet.ibm.com
Fri Aug 1 12:37:07 UTC 2014


On 8/1/2014 4:37 AM, Eoghan Glynn wrote:
>
>
>> Heat cfntools is based on SSH, so I assume it requires TCP/IP connectivity
>> between the VM and the central agent (or collector). But in the cloud, some
>> networks are isolated from the infrastructure-layer network for security
>> reasons. Some of our customers even explicitly require such security
>> protection. Does it mean those isolated VMs cannot be monitored by this
>> proposed VM agent?
>
> Yes, that sounds plausible to me.

My understanding is that this VM agent for ceilometer would need 
connectivity to the nova API as well as to the AMQP broker. IMHO the 
infrastructure requirements from a network-topology POV will differ from 
provider to provider, based on customer requirements and environment.
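For example (purely illustrative -- the endpoint names below are placeholders, and only the standard library is used), such an in-guest agent could sanity-check that connectivity before trying to publish:

```python
import socket

def can_reach(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoints -- real values depend on the deployment topology:
# can_reach("nova-api.example.com", 8774)   # nova API
# can_reach("amqp.example.com", 5672)       # AMQP broker
```

Whether those checks can ever succeed is exactly the topology question above.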

>
> Cheers,
> Eoghan
>
>> I really wish we could figure out how it could work for all VMs but with no
>> security issues.
>>
>> I'm not familiar with heat-cfntools, so, correct me if I am wrong :)
>>
>>
>> Best regards!
>> Kurt
>>
>> -----Original Message-----
>> From: Eoghan Glynn [mailto:eglynn at redhat.com]
>> Sent: Friday, August 1, 2014 14:46
>> To: OpenStack Development Mailing List (not for usage questions)
>> 主题: Re: [openstack-dev] [ceilometer] Compute agent local VM inspector -
>> potential enhancement
>>
>>
>>
>>> Disclaimer: I'm not fully versed in ceilometer internals, so bear with me.
>>>
>>> For consumers wanting to leverage ceilometer as a telemetry service
>>> atop non-OpenStack Clouds or infrastructure they don't own, some edge
>>> cases crop up. Most notably the consumer may not have access to the
>>> hypervisor host and therefore cannot leverage the ceilometer compute
>>> agent on a per host basis.
>>
>> Yes, currently such access to the hypervisor host is required, at least in
>> the case of the libvirt-based inspector.
>>
>>> In such scenarios it's my understanding the main option is to employ
>>> the central agent to poll measurements from the monitored resources
>>> (VMs, etc.).
>>
>> Well, the ceilometer central agent is not generally concerned with
>> polling related *directly* to VMs - rather it handles acquiring data from
>> RESTful APIs (glance, neutron, etc.) that are not otherwise available in the
>> form of notifications, and also from host-level interfaces such as SNMP.
>>

Thanks for the additional clarity. Perhaps this proposed local VM agent 
fills additional use cases where ceilometer is being used without 
openstack proper (e.g. without a full set of openstack-compliant services 
like neutron, glance, etc.).
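As a rough sketch of what such an in-guest inspector would do (hypothetical and not the PoC linked below -- on a Linux guest, memory stats can be derived by parsing /proc/meminfo; the parser here runs against an embedded sample so the format is clear):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key: value kB' lines into a dict of kB ints."""
    stats = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts and parts[0].isdigit():
            stats[key.strip()] = int(parts[0])
    return stats

# Embedded sample; on a real guest, read the contents of /proc/meminfo instead.
SAMPLE = """MemTotal:       2048000 kB
MemFree:         512000 kB
Buffers:          64000 kB"""
```

A real agent would poll this periodically and publish the resulting samples rather than just parsing them.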

>>> However this approach requires Cloud APIs (or other mechanisms) which
>>> allow the polling impl to obtain the desired measurements (VM memory,
>>> CPU, net stats, etc.), and moreover the polling approach has its own
>>> set of pros / cons from an arch / topology perspective.
>>
>> Indeed.
>>
>>> The other potential option is to setup the ceilometer compute agent
>>> within the VM and have each VM publish measurements to the collector
>>> -- a local VM agent / inspector if you will. With respect to this
>>> local VM agent approach:
>>> (a) I haven't seen this documented to date; is there any desire / reqs
>>> to support this topology?
>>> (b) If yes to #a, I whipped up a crude PoC here:
>>> http://tinyurl.com/pqjgotv  Are folks willing to consider a BP for
>>> this approach?
>>
>> So in a sense this is similar to the Heat cfn-push-stats utility[1] and
>> seems to suffer from the same fundamental problem, i.e. the need for
>> injection of credentials (user/passwds, keys, whatever) into the VM in order
>> to allow the metric datapoints to be pushed up to the infrastructure layer
>> (e.g. onto the AMQP bus, or to a REST API endpoint).
>>
>> How would you propose to solve that credentialing issue?
>>

My initial approach would be to target use cases where end users do not 
have direct guest access, or have limited guest access such that their 
UID / GID cannot read the conf file. For example, instances that only 
provide application access, provisioned using heat SoftwareDeployments 
(http://tinyurl.com/qxmh2of), or trove database instances.

In general, from a security POV I don't see this approach as much 
different from what's done with the trove guest agent 
(http://tinyurl.com/ohvtmtz).

Longer term, perhaps the credentialing concern could be mitigated using 
Barbican, as suggested here: https://bugs.launchpad.net/nova/+bug/1158328
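In the meantime, one mitigation is simply to verify that the credential-bearing conf file is not group/world readable. A minimal POSIX-only sketch (the path below is illustrative, not a fixed convention):

```python
import os
import stat

def owner_only(path):
    """Return True if the file grants no read/write access to group or other."""
    mode = os.stat(path).st_mode
    unwanted = stat.S_IRGRP | stat.S_IWGRP | stat.S_IROTH | stat.S_IWOTH
    return not (mode & unwanted)

# e.g. owner_only("/etc/ceilometer/ceilometer.conf")  # illustrative path
```

This only limits exposure to other guest users, of course; it does nothing about credentials being present in the guest at all, which is the problem Barbican would address.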

>> Cheers,
>> Eoghan
>>
>> [1]
>> https://github.com/openstack/heat-cfntools/blob/master/bin/cfn-push-stats
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
