[openstack-dev] [Heat] Locking and ZooKeeper - a space oddysey
Chris Friesen
chris.friesen at windriver.com
Wed Oct 30 20:02:49 UTC 2013
On 10/30/2013 01:34 PM, Joshua Harlow wrote:
> To me you just made state consistency a lock by another name.
>
> A lock protects a region of code from concurrent access
Personally, I view a lock as protecting a set of data, not a region of
code, from concurrent access.
> The question to me becomes what happens to that state consistency when
> it's running in a distributed system, which all of OpenStack is running
> in. At that point you need a way to ensure multiple servers (going
> through various states) are not manipulating the same resources at the
> same time (delete a volume from cinder while attaching it in nova).
> Those two separate services likely do not share the same state
> transitions (and likely never will, since they would become tightly
> coupled at that point). So you need some type of coordination system to
> ensure the ordering of these two resource actions is done in a
> consistent manner.
This sort of thing seems solvable by a "reserve-before-use" kind of
model, without needing any mutex locking as such.
When attaching, run an atomic
"check-that-owner-is-empty-and-set-instance-as-owner" transaction very
early to record the instance as the owner of the volume. Then reload
from the database to confirm the instance really is the current owner,
and you're guaranteed that nobody can delete the volume out from under
you.
When deleting, if the current owner is set and the owner instance
exists, bail out with an error.
This is essentially akin to using atomic-test-and-set instead of a mutex.
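A minimal sketch of the idea, using SQLite as a stand-in for the real
database. The table, column, and function names here are hypothetical,
not the actual Cinder/Nova schema; the point is that a conditional
UPDATE ("set owner only if owner is NULL") acts as the atomic
test-and-set, so no separate mutex is needed:

```python
# Sketch of reserve-before-use via an atomic test-and-set on the
# volume's owner column. Schema and names are illustrative only.
import sqlite3

def attach_volume(conn, volume_id, instance_id):
    """Atomically claim the volume; succeeds only if no owner is set."""
    cur = conn.execute(
        "UPDATE volumes SET owner = ? WHERE id = ? AND owner IS NULL",
        (instance_id, volume_id),
    )
    conn.commit()
    if cur.rowcount == 0:
        # Someone else owns it (or it doesn't exist) -- bail out.
        raise RuntimeError("volume already owned or missing")
    # Reload to confirm we really are the current owner.
    (owner,) = conn.execute(
        "SELECT owner FROM volumes WHERE id = ?", (volume_id,)
    ).fetchone()
    assert owner == instance_id

def delete_volume(conn, volume_id):
    """Refuse to delete a volume that still has an owner."""
    (owner,) = conn.execute(
        "SELECT owner FROM volumes WHERE id = ?", (volume_id,)
    ).fetchone()
    if owner is not None:
        raise RuntimeError("volume in use by %s" % owner)
    conn.execute("DELETE FROM volumes WHERE id = ?", (volume_id,))
    conn.commit()
```

With this in place, a delete racing against an attach either sees the
owner already set and errors out, or completes before the attach can
claim the row; neither path needs a distributed lock service.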
Chris