<div dir="ltr"><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Jan 9, 2014 at 2:34 PM, Nachi Ueno <span dir="ltr"><<a href="mailto:nachi@ntti3.com" target="_blank">nachi@ntti3.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Doug<br>
<br>
2014/1/9 Doug Hellmann <<a href="mailto:doug.hellmann@dreamhost.com">doug.hellmann@dreamhost.com</a>>:<br>
<div class="im">><br>
><br>
><br>
> On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno <<a href="mailto:nachi@ntti3.com">nachi@ntti3.com</a>> wrote:<br>
>><br>
>> Hi folks<br>
>><br>
>> Thank you for your input.<br>
>><br>
>> The key difference from external configuration system (Chef, puppet<br>
>> etc) is integration with<br>
>> openstack services.<br>
>> There are cases a process should know the config value in the other hosts.<br>
>> If we could have centralized config storage api, we can solve this issue.<br>
>><br>
>> One example of such case is neuron + nova vif parameter configuration<br>
>> regarding to security group.<br>
>> The workflow is something like this.<br>
>><br>
>> nova asks vif configuration information for neutron server.<br>
>> Neutron server ask configuration in neutron l2-agent on the same host<br>
>> of nova-compute.<br>
> >
> > That extra round trip does sound like a potential performance bottleneck,
> > but sharing the configuration data directly is not the right solution. If
> > the configuration setting names are shared, they become part of the
> > integration API between the two services. Nova should ask neutron how to
> > connect the VIF, and it shouldn't care how neutron decides to answer that
> > question. The configuration setting is an implementation detail of neutron
> > that shouldn't be exposed directly to nova.
>
> I agree for the nova - neutron interface.
> However, the neutron server and neutron L2 agent configurations depend on
> each other.
<div class="im"><br>
> Running a configuration service also introduces what could be a single point<br>
> of failure for all of the other distributed services in OpenStack. An<br>
> out-of-band tool like chef or puppet doesn't result in the same sort of<br>
> situation, because the tool does not have to be online in order for the<br>
> cloud to be online.<br>
>
> We can take the same approach (e.g. copy the information into a local cache).
>
> Thank you for your input; it helped me organize my thoughts.
> My proposal can be split into two blueprints.
>
> [BP1] Config API for other processes
> Provide a standard way to read a config value from another process on the
> same host or on another host.

Please don't do this. It's just a bad idea to expose the configuration settings between apps this way, because it couples the applications tightly at a low level, instead of letting the applications have APIs for sharing logical information at a high level. It's the difference between asking "what is the value of a specific configuration setting on a particular hypervisor" and asking "how do I connect a VIF for this instance". The latter lets you provide different answers based on context. The former doesn't.
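To make the contrast concrete, here is a rough sketch of the two styles. The
config client in the first function is hypothetical (it mirrors the API
proposed below), and the neutron call in the second is only meant to stand in
for "ask neutron a logical question":

    def vif_info_via_shared_config(conf_client, host):
        # Nova has to know that the L2 agent exposes a 'firewall_driver'
        # option and what each possible value implies -- a low-level
        # coupling to neutron's implementation details.
        driver = conf_client.host(host).firewall_driver  # hypothetical API
        return {'hybrid_plug': 'iptables' in driver}

    def vif_info_via_neutron_api(neutron, port_id):
        # Nova asks the logical question; neutron answers it however its
        # own configuration dictates, and the setting itself stays hidden.
        port = neutron.show_port(port_id)['port']  # python-neutronclient
        return {'vif_type': port.get('binding:vif_type')}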
<div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">Doug</div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
> - API example:
>
>     conf.host('host1').firewall_driver
>
> - Conf-file-based implementation:
>   The config for each host would be placed here:
>
>     /etc/project/conf.d/{hostname}/agent.conf
>
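A minimal sketch of how such an accessor could be built on top of the existing
oslo.config library and the per-host conf.d layout above. The HostConf class,
the conf_dir default, and reading firewall_driver from the DEFAULT section are
illustrative assumptions, not an existing API:

    import os

    from oslo.config import cfg

    class HostConf(object):
        """Read another host's agent.conf from <conf_dir>/<hostname>/."""

        def __init__(self, hostname, opts, conf_dir='/etc/neutron/conf.d'):
            self._conf = cfg.ConfigOpts()
            self._conf.register_opts(opts)
            path = os.path.join(conf_dir, hostname, 'agent.conf')
            # Load only that host's file; no command-line arguments.
            self._conf(args=[], default_config_files=[path])

        def __getattr__(self, name):
            return getattr(self._conf, name)

    # Roughly the proposed usage, conf.host('host1').firewall_driver:
    host1 = HostConf('host1', [cfg.StrOpt('firewall_driver')])
    print(host1.firewall_driver)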
> [BP2] Multiple backends for storing config files
>
> Currently we only have file-based configuration.
> In this bp, we extend the supported config storage backends:
> - KVS
> - SQL
> - Chef (Ohai)
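For illustration, a pluggable backend could look something like the sketch
below; the driver interface and class names are assumptions for this email,
not part of any existing blueprint or library:

    import abc
    import os

    import six
    from six.moves import configparser

    @six.add_metaclass(abc.ABCMeta)
    class ConfigBackend(object):
        """Driver interface; KVS, SQL, or Ohai backends would implement it."""

        @abc.abstractmethod
        def get(self, hostname, section, option):
            """Return the value of `option` for `hostname`, or None."""

    class FileBackend(ConfigBackend):
        """The current behaviour: read conf.d/<hostname>/agent.conf."""

        def __init__(self, conf_dir='/etc/neutron/conf.d'):
            self.conf_dir = conf_dir

        def get(self, hostname, section, option):
            parser = configparser.SafeConfigParser()
            parser.read(os.path.join(self.conf_dir, hostname, 'agent.conf'))
            if parser.has_option(section, option):
                return parser.get(section, option)
            return None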
>
> Best
> Nachi
>
> > Doug
> >
> >>
> >> host1
> >>   neutron server
> >>   nova-api
> >>
> >> host2
> >>   neutron l2-agent
> >>   nova-compute
> >>
> >> In this case, a process needs to know config values on the other host.
> >>
> >> Replying to some questions:
> >>
> >> > Adding a config server would dramatically change the way that
> >> > configuration management tools would interface with OpenStack services.
> >> [Jay]
> >>
> >> Since this bp just adds a new mode, we can still use the existing config
> >> files.
> >>
> >> > why not help to make Chef or Puppet better and cover more OpenStack
> >> > use-cases rather than add yet another competing system? [Doug, Morgan]
> >>
> >> I believe this system is not a competing system.
> >> The key point is that we should have a standard API to access such services.
> >> As Oleg suggested, we can use an SQL server, a KV store, or Chef or Puppet
> >> as the backend system.
> >>
> >> Best
> >> Nachi
> >>
> >>
> >> 2014/1/9 Morgan Fainberg <m@metacloud.com>:
> >> > I agree with Doug's question, but would also extend the train of thought
> >> > to ask why not help to make Chef or Puppet better and cover more
> >> > OpenStack use-cases, rather than add yet another competing system?
> >> >
> >> > Cheers,
> >> > Morgan
> >> >
> >> > On January 9, 2014 at 10:24:06, Doug Hellmann
> >> > (doug.hellmann@dreamhost.com) wrote:
> >> >
> >> > What capabilities would this new service give us that existing, proven,
> >> > configuration management tools like chef and puppet don't have?
> >> >
> >> >
> >> > On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno <nachi@ntti3.com> wrote:
> >> >>
> >> >> Hi Flavio
> >> >>
> >> >> Thank you for your input.
> >> >> I agree with you: oslo.config isn't the right place to have server-side
> >> >> code.
> >> >>
> >> >> How about oslo.configserver?
> >> >> For authentication, we can reuse keystone auth and oslo.rpc.
> >> >>
> >> >> Best
> >> >> Nachi
> >> >>
> >> >>
> >> >> 2014/1/9 Flavio Percoco <flavio@redhat.com>:
> >> >> > On 08/01/14 17:13 -0800, Nachi Ueno wrote:
> >> >> >>
> >> >> >> Hi folks
> >> >> >>
> >> >> >> OpenStack processes tend to have many config options, and there are
> >> >> >> many hosts.
> >> >> >> It is a pain to manage this many config options.
> >> >> >> Centralizing this management helps operations.
> >> >> >>
> >> >> >> We can use tools like Chef or Puppet; however,
> >> >> >> sometimes one process depends on another process's configuration.
> >> >> >> For example, nova depends on neutron's configuration, etc.
> >> >> >>
> >> >> >> My idea is to have a config server in oslo.config, and let cfg.CONF
> >> >> >> get its config from the server.
> >> >> >> This approach has several benefits:
> >> >> >>
> >> >> >> - We get centralized management without modifying each project
> >> >> >>   (nova, neutron, etc.)
> >> >> >> - We can provide a Horizon interface for configuration
> >> >> >>
> >> >> >> This is the bp for this proposal:
> >> >> >> https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
> >> >> >>
> >> >> >> I would very much appreciate any comments on this.
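For illustration, one way cfg.CONF could be fed from a central server without
modifying the consuming projects is to fetch the rendered ini file and hand it
to oslo.config as if it were a local file. The config-server URL and the helper
below are assumptions, only a sketch of that idea:

    import tempfile
    import urllib2

    from oslo.config import cfg

    def load_remote_config(conf, url):
        # Fetch the rendered ini file from the (hypothetical) config server.
        body = urllib2.urlopen(url).read()
        # Write it to a temporary file so the normal oslo.config file
        # loading path can be reused unchanged.
        with tempfile.NamedTemporaryFile(suffix='.conf', delete=False) as f:
            f.write(body)
        conf(args=[], default_config_files=[f.name])

    # load_remote_config(cfg.CONF, 'http://config-server/nova/host1.conf')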
> >> >> >
> >> >> > I've thought about this as well. I like the overall idea of having a
> >> >> > config server. However, I don't like the idea of having it within
> >> >> > oslo.config. I'd prefer oslo.config to remain a library.
> >> >> >
> >> >> > Also, I think it would be more complex than just having a server that
> >> >> > provides the configs. It'll need authentication like all other
> >> >> > services in OpenStack, and perhaps even support for encryption.
> >> >> >
> >> >> > I like the idea of a config registry, but as mentioned above, IMHO it
> >> >> > should live under its own project.
> >> >> >
> >> >> > That's all I've got for now,
> >> >> > FF
> >> >> >
> >> >> > --
> >> >> > @flaper87
> >> >> > Flavio Percoco