<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Sep 12, 2013 at 7:30 PM, Michael Basnight <span dir="ltr"><<a href="mailto:mbasnight@gmail.com" target="_blank">mbasnight@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div class="im">On Sep 12, 2013, at 2:39 AM, Thierry Carrez wrote:<br>
<br>
> Sergey Lukjanov wrote:<br>
><br>
>> [...]<br>
>> As you can see, resource provisioning is just one of the features, and the implementation details are not critical for the overall architecture. It performs only the first step of the cluster setup. We’ve been considering Heat for a while, but ended up with direct API calls in favor of speed and simplicity. Going forward, Heat integration will be done by implementing the extension mechanisms [3] and [4] as part of the Icehouse release.<br>
>><br>
>> The next part, Hadoop cluster configuration, is already extensible, and we have several plugins - Vanilla, Hortonworks Data Platform, and a Cloudera plugin has been started too. This allows us to unify management of different Hadoop distributions under a single control plane. The plugins are responsible for correct Hadoop ecosystem configuration on the already-provisioned resources, and use different Hadoop management tools like Ambari to set up and configure all cluster services, so there are no actual provisioning configs on the Savanna side in this case. Savanna and its plugins encapsulate the knowledge of Hadoop internals and the default configuration for Hadoop services.<br>
><br>
> My main gripe with Savanna is that it combines (in its upcoming release)<br>
> what sounds to me like two very different services: a Hadoop cluster<br>
> provisioning service (like what Trove does for databases) and a<br>
> MapReduce+ data API service (like what Marconi does for queues).<br>
><br>
> Making it part of the same project (rather than two separate projects,<br>
> potentially sharing the same program) makes discussions about shifting<br>
> some of its clustering ability to another library/project more complex<br>
> than they should be (see below).<br>
><br>
> Could you explain the benefit of having them within the same service,<br>
> rather than two services with one consuming the other ?<br>
<br>
</div>And for the record, I don't think that Trove is the perfect fit for it today. We are still working on a clustering API. But when we create it, I would love the Savanna team's input, so we can try to make a pluggable API that's usable for people who want MySQL or Cassandra or even Hadoop. I'm less a fan of a clustering library, because in the end we will both have API calls like POST /clusters and GET /clusters, and there will be API duplication between the projects.<br>
<div class="im"><br></div></blockquote>I think that Cluster API (if it would be created) will be helpful not only for Trove and Savanna. NoSQL, RDBMS and Hadoop are not unique software which can be clustered. What about different kind of messaging solutions like RabbitMQ, ActiveMQ or J2EE containers like JBoss, Weblogic and WebSphere, which often are installed in clustered mode. Messaging, databases, J2EE containers and Hadoop have their own management cycle. It will be confusing to make Cluster API a part of Trove which has different mission - database management and provisioning. <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div class="im">
><br>
>> The next topic is “Cluster API”.<br>
>><br>
>> The concern that was raised is how to extract general clustering functionality into a common library. Cluster provisioning and management is currently relevant for a number of projects within the OpenStack ecosystem: Savanna, Trove, TripleO, Heat and TaskFlow.<br>
>><br>
>> Still, each of the projects has its own understanding of what cluster provisioning is. The idea of extracting common functionality sounds reasonable, but the details still need to be worked out.<br>
>><br>
>> I’ll try to highlight the Savanna team's current perspective on this question. The notion of “cluster management”, from my perspective, has several levels:<br>
>> 1. Resource provisioning and configuration (instances, networks, storage). Heat is the main tool here, possibly with additional support from underlying services. For example, the instance grouping API extension [5] in Nova would be very useful.<br>
>> 2. Distributed communication/task execution. There is a project in the OpenStack ecosystem with the mission to provide a framework for distributed task execution - TaskFlow [6]. It was started quite recently. In Savanna we are really looking forward to using more and more of its functionality in the I and J cycles as TaskFlow itself matures.<br>
>> 3. Higher-level clustering - management of the actual services running on top of the infrastructure. For example, configuring HDFS data nodes in Savanna, or setting up a MySQL cluster with Percona or Galera in Trove. These operations are typically very specific to the project domain. As for Savanna specifically, we make heavy use of our knowledge of Hadoop internals to deploy and configure it properly.<br>
>><br>
>> The overall conclusion seems to be that it makes sense to enhance Heat's capabilities and invest in TaskFlow development, leaving domain-specific operations to the individual projects.<br>
><br>
> The thing we'd need to clarify (and the incubation period would be used<br>
> to achieve that) is how to reuse as much as possible between the various<br>
> cluster provisioning projects (Trove, the cluster side of Savanna, and<br>
> possibly future projects). Solutions could be to create a library used by<br>
> Trove and Savanna, to extend Heat, or to make Trove the clustering thing<br>
> beyond just databases...<br>
><br>
> One way of making sure smart and non-partisan decisions are taken in<br>
> that area would be to make Trove and Savanna part of the same program,<br>
> or make the clustering part of Savanna part of the same program as<br>
> Trove, while the data API part of Savanna could live separately (hence<br>
> my question about two different projects vs. one project above).<br>
<br>
</div>Trove is not, nor will it be, a data API. I'd like to keep Savanna in its own program, but I could easily see it being a big data / data processing program, while Trove is a cluster provisioning / scaling / administration / "keep it online" program.<br>
<div class="im"><br>
><br>
>> I would also like to emphasize that Hadoop cluster management is already implemented in Savanna, including scaling support.<br>
>><br>
>> With all this, I do believe Savanna fills an important gap in OpenStack by providing data processing capabilities in a cloud environment in general, with integration with the Hadoop ecosystem as the first particular step.<br>
><br>
> For incubation we bless the goal of the project and the promise that it<br>
> will integrate well with the other existing projects. A<br>
> perfectly-working project can stay in incubation until it achieves<br>
> proper integration and avoids duplication of functionality with other<br>
> integrated projects. A perfectly-working project can also happily live<br>
> outside of OpenStack integrated release if it prefers a more standalone<br>
> approach.<br>
<br>
</div>A good example: our instance provisioning was also implemented directly in Trove, but the goal is to use Heat. So the TC asked us to use Heat for instance provisioning, and we outlined a set of goals to achieve before we go to Integrated status.<br>
<div class=""><div class="h5"><br>
> I think there is value in having Savanna in incubation so that we can<br>
> explore those avenues of collaboration between projects. It may take<br>
> more than one cycle of incubation to get it right (in fact, I would not<br>
> be surprised at all if it took us more than one cycle to properly<br>
> separate the roles between Trove / TaskFlow / Heat / clusterlib). During<br>
> this exploration, Savanna devs may also decide that integration is very<br>
> costly and that their immediate time is better spent adding key<br>
> features, and drop from the incubation track. But in all cases,<br>
> incubation sounds like the right first step to get everyone around the<br>
> same table.<br>
><br>
> --<br>
> Thierry Carrez (ttx)<br>
><br>
> _______________________________________________<br>
> OpenStack-dev mailing list<br>
> <a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<br>
</div></div><br>
<br></blockquote></div><br></div></div>