Hi there,

I'm new to the list, and trying to get more information about the
following issue:

https://bugs.launchpad.net/heat/+bug/1353670

Is there anyone on the list who can explain under what conditions a user
might hit this? Workarounds? ETA for a fix?

Thanks!

- Ari

On Monday, December 15, 2014 11:30 AM, Clint Byrum <clint@fewbar.com> wrote:

Excerpts from Anant Patil's message of 2014-12-15 07:15:30 -0800:
> On 13-Dec-14 05:42, Zane Bitter wrote:
> > On 12/12/14 05:29, Murugan, Visnusaran wrote:
> >>
> >>> -----Original Message-----
> >>> From: Zane Bitter [mailto:zbitter@redhat.com]
> >>> Sent: Friday, December 12, 2014 6:37 AM
> >>> To: openstack-dev@lists.openstack.org
> >>> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
> >>> showdown
> >>>
> >>> On 11/12/14 08:26, Murugan, Visnusaran wrote:
> >>>>>> [Murugan, Visnusaran]
> >>>>>> In case of rollback, where we have to clean up earlier versions of
> >>>>>> resources,
> >>>>> we could get the order from the old template. We'd prefer not to
> >>>>> have a graph table.
> >>>>>
> >>>>> In theory you could get it by keeping old templates around. But that
> >>>>> means keeping a lot of templates, and it will be hard to keep track
> >>>>> of when you want to delete them. It also means that when starting an
> >>>>> update you'll need to load every existing previous version of the
> >>>>> template in order to calculate the dependencies. It also leaves the
> >>>>> dependencies in an ambiguous state when a resource fails, and
> >>>>> although that can be worked around it will be a giant pain to
> >>>>> implement.
> >>>>
> >>>> Agree that looking at all templates for a delete is not good. But
> >>>> barring complexity, we feel we could achieve it by way of having an
> >>>> update and a delete stream for a stack update operation. I will
> >>>> elaborate in detail in the etherpad sometime tomorrow :)
> >>>>
> >>>>> I agree that I'd prefer not to have a graph table. After trying a
> >>>>> couple of different things I decided to store the dependencies in
> >>>>> the Resource table, where we can read or write them virtually for
> >>>>> free because it turns out that we are always reading or updating
> >>>>> the Resource itself at exactly the same time anyway.
> >>>>
> >>>> Not sure how this will work in an update scenario when a resource
> >>>> does not change and its dependencies do.
> >>>
> >>> We'll always update the requirements, even when the properties don't
> >>> change.
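[An illustrative aside, not from the thread: a minimal runnable sketch of
dependencies living on the Resource row itself and being rewritten on
every update, even when the properties are unchanged. The schema and
names here are hypothetical, not the PoC's.]

    import json
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE resource ("
                 "id TEXT PRIMARY KEY, properties TEXT, requires TEXT)")

    def write_resource(rid, properties, requires):
        # One row-level write covers the resource and its dependencies
        # together; there is no separate graph table to keep in sync.
        conn.execute(
            "INSERT OR REPLACE INTO resource (id, properties, requires)"
            " VALUES (?, ?, ?)",
            (rid, json.dumps(properties), json.dumps(sorted(requires))))

    write_resource('server', {'flavor': 'm1.small'}, {'subnet'})
    # Properties unchanged, dependencies changed: same single write path.
    write_resource('server', {'flavor': 'm1.small'}, {'subnet', 'sec_group'})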
> >>
> >> Can you elaborate a bit on rollback.
> >
> > I didn't do anything special to handle rollback. It's possible that we
> > need to - obviously the difference in the UpdateReplace + rollback case
> > is that the replaced resource is now the one we want to keep, and yet
> > the replaced_by/replaces dependency will force the newer (replacement)
> > resource to be checked for deletion first, which is an inversion of the
> > usual order.
>
> This is where the version is so handy! For UpdateReplaced ones, there is
> an older version to go back to. This version could just be the template
> ID, as I mentioned in another e-mail. All resources are at the current
> template ID if they are found in the current template, even if there is
> no need to update them. Otherwise, they need to be cleaned up in the
> order given in the previous templates.
>
> I think the template ID is used as the version, as far as I can see in
> Zane's PoC. If the resource template key doesn't match the current
> template key, the resource is deleted. The version is a misnomer here,
> but that field (template id) is used as though we had versions of
> resources.
>
> > However, I tried to think of a scenario where that would cause problems
> > and I couldn't come up with one. Provided we know the actual,
> > real-world dependencies of each resource, I don't think the ordering of
> > those two checks matters.
> >
> > In fact, I currently can't think of a case where the dependency order
> > between replacement and replaced resources matters at all. It matters
> > in the current Heat implementation because resources are artificially
> > segmented into the current and backup stacks, but with a holistic view
> > of dependencies that may well not be required. I tried taking that line
> > out of the simulator code and all the tests still passed. If anybody
> > can think of a scenario in which it would make a difference, I would be
> > very interested to hear it.
> >
> > In any event though, it should be no problem to reverse the direction
> > of that one edge in these particular circumstances if it does turn out
> > to be a problem.
> >
> >> We had an approach with depends_on and needed_by columns in the
> >> Resource table, but dropped it when we figured out we had too many DB
> >> operations for Update.
> >
> > Yeah, I initially ran into this problem too - you have a bunch of nodes
> > that are waiting on the current node, and now you have to go look them
> > all up in the database to see what else they're waiting on in order to
> > tell if they're ready to be triggered.
> >
> > It turns out the answer is to distribute the writes but centralise the
> > reads. So at the start of the update, we read all of the Resources,
> > obtain their dependencies and build one central graph [1]. We then make
> > that graph available to each resource (either by passing it as a
> > notification parameter, or storing it somewhere central in the DB that
> > they will all have to read anyway, i.e. the Stack). But when we update
> > a dependency we don't update the central graph, we update the
> > individual Resource, so there's no global lock required.
> >
> > [1]
> > https://github.com/zaneb/heat-convergence-prototype/blob/distributed-graph/converge/stack.py#L166-L168
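[For illustration, a self-contained sketch of that "distribute the
writes, centralise the reads" idea. A dict stands in for the per-Resource
rows, recursion stands in for notifications, and all names are made up;
this is not the prototype's API.]

    STATE = {}                            # resource id -> state ("DB" rows)

    def start_update(requires):
        # requires: {id: set of ids it depends on}, read in one bulk query.
        # The graph is built once and never mutated, so readers need no lock.
        graph = {rid: frozenset(reqs) for rid, reqs in requires.items()}
        STATE.update((rid, 'PENDING') for rid in graph)
        for rid, reqs in graph.items():
            if not reqs:
                converge(rid, graph)      # leaves can start immediately
        return graph

    def converge(rid, graph):
        STATE[rid] = 'COMPLETE'           # the write touches one row only
        for dep, reqs in graph.items():   # the reads use the static graph
            if rid in reqs and STATE[dep] == 'PENDING' and all(
                    STATE[r] == 'COMPLETE' for r in reqs):
                converge(dep, graph)      # its last dependency just resolved

    start_update({'net': set(), 'subnet': {'net'}, 'server': {'subnet'}})
    assert all(s == 'COMPLETE' for s in STATE.values())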
>
> A centralized graph and centralized decision making will make the
> implementation far simpler than a distributed one. This looks academic,
> but the simplicity beats everything! When each worker has to decide,
> there needs to be a lock; DB transactions alone are not enough. In
> contrast, when the decision making is centralized, that particular
> critical section can be attempted within a transaction and re-attempted
> if needed.

I'm concerned that we're losing sight of the whole point of convergence.

Yes, concurrency is hard, and state management is really the only thing
hard about concurrency.

What Zane is suggesting is a lock-free approach commonly called 'Read
Copy Update' or "RCU". It has high reader concurrency, but relies on
there only being one writer. It is quite simple, and has proven itself
enough to even be included in the Linux kernel:

http://lwn.net/Articles/263130/
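[For a feel of the pattern, a toy self-contained sketch of RCU in Python,
not Heat code: readers take a snapshot and never block, while the single
writer copies, modifies, and republishes.]

    import copy

    class RcuCell:
        """Single-writer, many-reader cell in the RCU style."""

        def __init__(self, value):
            self._current = value

        def read(self):
            return self._current    # readers get a consistent snapshot

        def update(self, mutate):   # one writer at a time, by convention
            new = copy.deepcopy(self._current)  # Copy...
            mutate(new)                         # ...Update the copy privately...
            self._current = new                 # ...then publish in one rebind
                                                # (atomic under CPython's GIL)

    deps = RcuCell({'server': {'subnet'}})
    snapshot = deps.read()                      # a reader mid-operation...
    deps.update(lambda d: d.update({'volume': {'server'}}))
    assert 'volume' not in snapshot             # ...still sees the old version
    assert 'volume' in deps.read()              # new readers see the new one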
> With the distributed approach, I see the following drawbacks:
> 1. Every time a resource is done, the peer resources (siblings) are
> checked to see if they are done, and completion is propagated to the
> parent. This happens for each resource.

This is slightly inaccurate.

Every time a resource is done, the resources that depend on that
resource are checked to see if they still have any unresolved
dependencies.

> 2. The worker has to run through all the resources to see if the stack
> is done, to mark it as completed.

If a worker completes a resource which has no dependent resources, it
only needs to check to see if all of the other edges of the graph are
complete to mark the state as complete. There is no full traversal
unless you want to make sure nothing regressed, which is not the way any
of the specs read.

> 3. The decision to converge is made by each worker, resulting in a lot
> of contention. The centralized graph restricts the contention point to
> one place where we can use DB transactions. It is easier to maintain
> code where particular decisions are made in one place rather than in
> many places.

Why not let multiple workers use DB transactions? The contention happens
_only if it needs to happen to preserve transaction consistency_ instead
of _always_.

> 4. The complex part we are trying to solve is deciding what to do next
> when a resource is done. With a centralized graph, this is abstracted
> out to the DB API. The API will return the next set of nodes. A smart
> SQL query can reduce a lot of the logic currently being coded in the
> worker/engine.

Having seen many such "smart" SQL queries, I have to say, this is
terrifying. Complexity in database access is by far the single biggest
obstacle to scaling out.

I don't really know why you'd want logic to move into the database. It
is the one place that you must keep simple in order to scale. We can
scale out Python like crazy... but SQL is generally a single-threaded,
impossible-to-debug component. So make the schema obvious and accesses
to it straightforward.

I think we need to land somewhere between the two approaches though.
Here is my idea for DB interaction; I realize now it's been in my head
for a while but I never shared it:

CREATE TABLE resource (
  id ...,
  -- all the stuff now
  version int,
  replaced_by int,
  complete_order int,
  PRIMARY KEY (id, version),
  KEY idx_replaced_by (replaced_by));

CREATE TABLE resource_dependencies (
  id ...,
  version int,
  needed_by ...,
  PRIMARY KEY (id, version, needed_by));

Then completion happens something like this:

BEGIN;
-- next slot in the stack's completion sequence (COALESCE covers the
-- first completion, when MAX() is NULL)
SELECT @complete_order := COALESCE(MAX(complete_order), 0)
  FROM resource WHERE stack_id = :stack_id:;
SET @complete_order := @complete_order + 1;
UPDATE resource SET complete_order = @complete_order, state = 'COMPLETE'
  WHERE id = :id: AND version = :version:;
-- if there is a replaced_version, point the old row at its replacement
UPDATE resource SET replaced_by = :version:
  WHERE id = :id: AND version = :replaced_version:;
-- wake the dependents of the completed resource; each of them still
-- re-checks its own remaining dependencies before running
SELECT DISTINCT r.id
  FROM resource r
  INNER JOIN resource_dependencies rd
    ON r.id = rd.needed_by AND r.version = rd.version
  WHERE rd.id = :id: AND rd.version = :version: AND r.state != 'COMPLETE';
-- in Python:
--   for id in results:
--       convergequeue.submit(id)
COMMIT;

Perhaps I've missed some revelation that makes this hard or impossible.
But I don't see a ton of database churn (one update per completion is
meh). I also don't see a lot of complexity in the query. The
complete_order can be used to handle deletes in the right order later
(note that I know that is probably the wrong way to do it, and that
sequences are a thing that can be used for this).

> 5. What would be the starting point for resource clean-up? The clean-up
> has to start when all the resources are updated. With no centralized
> graph, the DB has to be searched for all the resources with no
> dependencies and with older versions (or having older template keys),
> and removal has to start from there. With a centralized graph, this
> would be simpler, with SQL queries returning what needs to be done.
> Otherwise the search space for where to start the clean-up will be
> huge.

"Searched" may be the wrong word. With the table structure above, you
can find everything to delete with this query:

-- outright deletes, plus anything that has been replaced
SELECT r_old.id
FROM resource r_old
LEFT OUTER JOIN resource r_new
  ON r_old.id = r_new.id AND r_new.version > r_old.version
WHERE r_old.version = :cleanup_version:
  AND (r_new.id IS NULL OR r_old.replaced_by IS NOT NULL)
ORDER BY r_old.complete_order DESC;

That should delete everything in more or less the right order. I think
for that one you can just delete the rows as they're confirmed deleted
by the plugins; no large transaction is needed, since we'd not expect
these rows to be updated anymore.
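[For illustration, a sketch of how a worker might drain that query,
assuming it is extended to also return r_old.version; plugin_delete is a
hypothetical stand-in for the actual cloud-side delete.]

    CLEANUP_QUERY = """
    SELECT r_old.id, r_old.version
    FROM resource r_old
    LEFT OUTER JOIN resource r_new
      ON r_old.id = r_new.id AND r_new.version > r_old.version
    WHERE r_old.version = :cleanup_version
      AND (r_new.id IS NULL OR r_old.replaced_by IS NOT NULL)
    ORDER BY r_old.complete_order DESC
    """

    def plugin_delete(resource_id):
        pass  # stand-in: delete the real-world resource and wait for it

    def drain_cleanup(conn, cleanup_version):
        rows = conn.execute(CLEANUP_QUERY,
                            {"cleanup_version": cleanup_version}).fetchall()
        for rid, version in rows:      # reverse completion order
            plugin_delete(rid)         # confirm the real resource is gone
            with conn:                 # then one short transaction per row
                conn.execute(
                    "DELETE FROM resource WHERE id = ? AND version = ?",
                    (rid, version))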
> 6. When the engine restarts, the search space for where to start will
> be huge. With a centralized graph, the abstracted API to get the next
> set of nodes makes the implementation of the decision simpler.
>
> I am convinced enough that it is simpler to assign the responsibility
> to the engine to decide what needs to be done next. No locks will be
> required, not even resource locks! It is simpler from an
> implementation, understanding, and maintenance perspective.

I thought you started off saying you would need locks, but now you're
saying you won't. I agree no abstract locking is needed, just a
consistent view of the graph in the DB.

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev