[Openstack] [Metering] Agent configuration mechanism

Nick Barcet nick.barcet at canonical.com
Tue Jun 5 16:59:58 UTC 2012


On 06/05/2012 04:44 PM, Doug Hellmann wrote:
> On Tue, Jun 5, 2012 at 10:41 AM, Doug Hellmann
> <doug.hellmann at dreamhost.com> wrote:
>     On Tue, Jun 5, 2012 at 9:56 AM, Nick Barcet
>     <nick.barcet at canonical.com> wrote:
> 
>         Following up on our last meeting, here is a proposal for centrally
>         hosting configuration of agents in ceilometer.
> 
>         The main idea is that all agents of a given type should be sending
>         similarly formatted information in order for the information to be
>         usable, hence the need to ensure that configuration info is
>         centrally stored and retrieved.  This would rule out, in my mind,
>         the idea that we could use the global flags object, as distribution
>         of the configuration file is left to the cloud implementor and does
>         not lend itself to easy and synchronized updates of agent config.
> 
>         Configuration format and content are left to the agent's
>         implementation, but it is assumed that each meter covered by an
>         agent can be:
>          * enabled or disabled
>          * set to send information at a specified interval.
> 
> 
>     Right now we only have one interval for all polling. Do you think we
>     need to add support for polling different values at different
>     intervals? Do we need other per-agent settings, or are all of the
>     settings the same for all agents? (I had assumed the latter would be
>     all we needed.)

I would have thought that we may want to support different intervals per
meter, based on the billing rules that one may want to offer.  For
example, I may want to bill compute by the hour but floating IPs by the
day, and hence have a different reporting interval for each.
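
For illustration, a per-meter configuration along those lines might look
like this (a sketch only; the meter names and fields are hypothetical):

    # Hypothetical per-meter agent configuration: each meter can be
    # enabled/disabled and given its own reporting interval (seconds).
    AGENT_CONFIG = {
        'instance':    {'enabled': True, 'interval': 3600},   # compute, hourly
        'floating_ip': {'enabled': True, 'interval': 86400},  # floating IPs, daily
    }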

>         1/ Configuration is stored for each agent in the database as follows:
> 
>         +-----------+----------+--------------------------------------------+
>         | Field     | Type     | Note                                       |
>         +-----------+----------+--------------------------------------------+
>         | AgentType | String   | Unique agent type                          |
>         | ConfVers  | Integer  | Version of the configuration               |
>         | Config    | Text     | JSON configuration info (defined by agent) |
>         +-----------+----------+--------------------------------------------+
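> 
>         For illustration, in SQLAlchemy terms that table could look like
>         this (a sketch only; class and column names are illustrative):
> 
>            from sqlalchemy import Column, Integer, String, Text
>            from sqlalchemy.ext.declarative import declarative_base
> 
>            Base = declarative_base()
> 
>            class AgentConfig(Base):
>                # One row per agent type; Config holds agent-defined JSON.
>                __tablename__ = 'agent_config'
>                agent_type = Column('AgentType', String(255), primary_key=True)
>                conf_vers = Column('ConfVers', Integer, nullable=False, default=1)
>                config = Column('Config', Text)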
> 
>         2/ Config is retrieved via the messaging queue upon boot and then
>         once a day (the refresh interval should be defined in the global
>         flags object) to check whether the config has changed.
> 
> 
>     Updating the config once a day is not going to be enough in an
>     environment with a lot of compute nodes.
> 
> 
> Two thoughts merged into one sentence there. Need more caffeine. 
> 
> What I was trying to say was that updating the config once a day might
> not be enough, and that in environments with a lot of compute nodes,
> going around to manually restart the services each time the config
> changes will be a pain. See below for more discussion of pushing config
> settings out.

Agreed, and that's why I proposed that the interval for configuration
refresh should be set in the global flags object (this is something that
can be shared among all the agents).

> 
> 
>         Request sent by the agent upon boot and at each refresh interval:
> 
>            'reply_to': 'get_config_data',
>            'correlation_id': xxxxx
>            'version': '1.0',
>            'args': {'data': {
>                       'AgentType': agent.type,
>                       'CurrentVersion': agent.version,
>                       'ConfigDefault': agent.default,
>                       },
>                    },
> 
> 
>     Is this a standard OpenStack RPC call?

Not sure about that, but if it can be, it would be easier :)
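
If we do use it, I imagine the agent side would look something like this
(a rough sketch assuming nova's rpc module; the 'ceilometer.collector'
topic and 'get_config_data' method names are made up):

    # Rough sketch of the boot-time config request, assuming nova's rpc
    # layer is available to the agents.
    from nova import context, rpc

    def fetch_config(agent):
        ctxt = context.get_admin_context()
        # Blocking call; returns the collector's reply described below.
        return rpc.call(ctxt, 'ceilometer.collector', {
            'method': 'get_config_data',
            'version': '1.0',
            'args': {'data': {
                'AgentType': agent.type,
                'CurrentVersion': agent.version,
                'ConfigDefault': agent.default,
            }},
        })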

>         Where ConfigDefault holds the "sane" defaults proposed by the
>         agent authors.
> 
> 
>     Why is the agent proposing default settings?

So that the first agent of a given type can populate its info with sane
defaults that can then be edited later on?

>         If no config record is found the collector creates the record, sets
>         ConfVers to 1 and sends back a normal reply.
> 
>         Reply sent by the collector:
>            'correlation_id': xxxxx
>            'version': '1.0',
> 
> 
>     Do we need minor versions for the config settings, or are those
>     simple sequence numbers to track which settings are the "most current"?

Simple sequence was what I was thinking about.

>            'args': {'data': {
>                       'Result': result.code,
>                       'ConfVers': ConfVers,
>                       'Config': Config,
>                       },
>                    },
>            }
> 
>         Result is set as follows:
>            200  -> Config was retrieved successfully
>            201  -> Config was created based on received defaults
>                    (Config is empty)
>            304  -> Config version is identical to CurrentVersion
>                    (Config is empty)
> 
> 
>     Why does the agent need to know the difference between those?
>     Shouldn't it simply use the settings it is given?

To avoid processing update code if the update is not needed?
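
Concretely, I would expect the agent to handle the reply along these
lines (a sketch; the function and attribute names are illustrative):

    import json

    # Sketch of agent-side handling of the collector's reply, mirroring
    # the 200/201/304 result codes described above.
    def apply_reply(agent, reply):
        data = reply['args']['data']
        if data['Result'] == 304:
            return  # config unchanged, skip the update path entirely
        agent.version = data['ConfVers']
        if data['Result'] == 200:
            # A newer config exists: parse and apply it.
            agent.apply_config(json.loads(data['Config']))
        # 201: our own defaults were stored verbatim, nothing to apply.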

>         This leaves open the question of having some UI to change the
>         config, but I think we can live with manual updating of the
>         records for the time being.
> 
> 
>     Since we're using the service and RPC frameworks from nova
>     elsewhere, we have the option of issuing commands to all of the
>     agents from a central server. That would let us, for example, use a
>     cast() call to push a new configuration out to all of the agents at
>     once, on demand (from a command line program, for example).

Sounds nifty.  Let's amend.
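
For instance, the pushing side could be a fanout cast along these lines
(a sketch assuming nova's rpc layer; the topic and method names are
assumptions, not existing APIs):

    # Sketch of pushing a new config to all agents of a given type at
    # once, e.g. from a command line program on the central server.
    from nova import context, rpc

    def push_config(agent_type, conf_vers, config):
        ctxt = context.get_admin_context()
        rpc.fanout_cast(ctxt, 'ceilometer.agent.' + agent_type, {
            'method': 'update_config',
            'version': '1.0',
            'args': {'data': {'ConfVers': conf_vers, 'Config': config}},
        })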

>     I don't see the need for storing the configuration in the database.
>     It seems just as easy to have a configuration file on the central
>     server. The collector could read the file each time it is asked for
>     the agent configuration, and the command line program that pushes
>     config changes out could do the same.

Over-engineering on my side, maybe.  You are right that the database is
NOT needed and we could do with a simple file, but then the collector
becomes stateful and HA considerations will start kicking in if we want
to have 2 collectors running in parallel.  If the DB is shared, the
issue is pushed to the DB, which will, hopefully, be redundant by nature.

>     Have you given any thought to distributing the secret value used for
>     signing incoming messages? A central configuration authority does
>     not give us a secure way to deliver secrets like that. If anyone
>     with access to the message queue can retrieve the key by sending RPC
>     requests, we might as well not sign the messages.

Actually, the private key used to generate a signature should be unique
to each host if we want signatures to have any value at all; therefore
distributing a common signing key should NOT be part of this, or we
would fall back to the notion of a shared secret, which is, IMHO, no
better than having a global password.

I would recommend that, for the time being, we just generate a random
key pair per host the first time the agent is run, allowing anyone with
further requirements to populate this value by other means.
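
A minimal sketch of that first-run generation, using PyCrypto purely for
illustration (both the library choice and the key path are assumptions):

    import os
    from Crypto.PublicKey import RSA  # PyCrypto, for illustration only

    KEY_PATH = '/var/lib/ceilometer/agent_key.pem'  # hypothetical location

    def ensure_host_key():
        # Generate a per-host key pair on first run, reuse it afterwards.
        if not os.path.exists(KEY_PATH):
            key = RSA.generate(2048)
            with open(KEY_PATH, 'w') as f:
                f.write(key.exportKey())  # PEM-encoded private key
            os.chmod(KEY_PATH, 0o600)    # keep the private key private
        with open(KEY_PATH) as f:
            return RSA.importKey(f.read())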

In any case, if we want to effectively check the signature, the public
key does need to be accessible by the collector, and we have yet to
define a way to do so...  Proposals welcome, but again, while I think
we should lay the ground for a great security experience, we certainly
don't need to solve it all in v1.

Nick
