[openstack-dev] Live migration/resizing summit talk

Joshua Harlow harlowja at yahoo-inc.com
Thu Apr 25 21:02:43 UTC 2013


Are there any blueprints/wikis for this new conductor usage?

I really don't want us to diverge into two paths doing this work,
especially if said conductor work is 'happening soon'.

https://wiki.openstack.org/wiki/StructuredStateManagement is my idea for
this, and it'd be nice not to go two ways to solve the same problem.

This is, of course, all connected to heat and its convection
workflow-common library, which nova should (hopefully) use to
accomplish this.
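
To make that concrete, the shape I have in mind is each step of a
resize/migration declared as a task with an explicit revert, plus a
small engine that applies tasks in order and rolls back completed ones
on failure. A rough sketch (all names invented for illustration, not
the actual convection API):

    class Task(object):
        """One reversible step of a larger workflow."""
        def apply(self, context):
            raise NotImplementedError

        def revert(self, context):
            raise NotImplementedError

    class ReserveResources(Task):
        def apply(self, context):
            context['reservation'] = 'resv-123'  # pretend claim/schedule

        def revert(self, context):
            context.pop('reservation', None)

    def run_workflow(tasks, context):
        completed = []
        try:
            for task in tasks:
                task.apply(context)
                completed.append(task)
        except Exception:
            # undo whatever finished, in reverse order
            for task in reversed(completed):
                task.revert(context)
            raise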

On 4/25/13 1:21 PM, "Dan Smith" <dms at danplanet.com> wrote:

>> I remember there being talk about possible solutions, something along
>> the lines of an external entity (orchestration/conductor... whatever)
>> creating a session key for said hypervisors, then allowing
>> communication between those hypervisors to perform said operation for
>> a limited period of time. That's one approach; said
>> orchestration/conductor could also open a secure tunnel and tell the
>> hypervisors to use said channel (and then said
>> orchestration/conductor thingy could close that channel
>> automatically...) for communication.
>
>Yep, that is my recollection of the discussion as well.
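
To flesh out the session-key variant from above (again, names invented,
just the shape of it): the conductor mints a short-lived HMAC token
naming the two hosts, and the receiving hypervisor refuses any
migration stream that doesn't present a valid, unexpired token. This
assumes a secret shared between conductor and computes, distributed
out of band:

    import hashlib
    import hmac
    import time

    def mint_migration_token(secret, source, dest, ttl=300):
        # conductor side; 'secret' must be bytes
        msg = '%s|%s|%d' % (source, dest, int(time.time()) + ttl)
        sig = hmac.new(secret, msg.encode(), hashlib.sha256).hexdigest()
        return msg, sig

    def verify_migration_token(secret, msg, sig):
        # receiver side; the expiry bounds the exposure window
        if int(msg.rsplit('|', 1)[1]) < time.time():
            return False  # token has expired
        want = hmac.new(secret, msg.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(want, sig)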
>
>We're planning to start work on this soon. I think the first step is
>just moving and unifying all of the resize/migrate paths (which are
>currently separate) to conductor. This will give us better test
>coverage on the live path (which is currently at 0%) as well as give us
>a chance to refactor things in a way that reduces or eliminates state
>on the compute nodes.
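
(Inline sketch of how I read "eliminates state on the compute nodes":
the conductor owns the sequencing, and each compute-side call stands
alone. The method names echo the existing compute RPC calls, but the
wiring and signatures here are simplified/invented:)

    class LiveMigrateTask(object):
        def __init__(self, compute_rpcapi):
            self.rpc = compute_rpcapi

        def execute(self, context, instance, source, dest):
            # each call can be handled without the compute node
            # remembering prior steps; the "where are we" knowledge
            # lives right here in the conductor
            self.rpc.pre_live_migration(context, instance, host=dest)
            self.rpc.live_migration(context, instance,
                                    source=source, dest=dest)
            self.rpc.post_live_migration(context, instance, host=dest)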
>
>After we do that, I think we can look at ways to improve the security
>of the actual transport between compute nodes for migration.
>
>In the past, I've created systems where the third party
>(conductor in this case) installs temporary keys on the sending and
>receiving nodes for the duration of time they're expected to converse,
>and then revokes them after. This seems to be the most efficient
>(network-wise) way to go, but it does expose the receiver to the sender
>for a period of time. Given that the compute node(s) can't initiate the
>migration, this seems like the best option, weighing risk and
>performance, IMHO.
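
(For illustration, on the receiving node the grant/revoke dance could
be as small as appending and then stripping a marked authorized_keys
entry -- mechanism assumed; a given hypervisor may need something else
entirely:)

    import os

    AUTH_KEYS = os.path.expanduser('~/.ssh/authorized_keys')
    MARKER = 'nova-migration-temp'

    def install_temp_key(pubkey):
        # grant, just before the transfer starts
        with open(AUTH_KEYS, 'a') as f:
            f.write('%s %s\n' % (pubkey.strip(), MARKER))

    def revoke_temp_key():
        # revoke, as soon as the migration finishes (or fails)
        with open(AUTH_KEYS) as f:
            lines = f.readlines()
        with open(AUTH_KEYS, 'w') as f:
            f.writelines(l for l in lines if MARKER not in l)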
>
>The other mechanism we discussed, as you said, was proxying all that
>traffic through the intermediate node, which ends up duplicating a lot
>of network traffic, making the process take longer, and technically
>makes it more susceptible to a hardware failure while it's in transit.
>Another problem is that this could potentially require a different
>proxy implementation on the conductor for each hypervisor, which I
>think is rather smelly.
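
(For contrast, the proxy approach boils down to a relay loop like the
one below, which is exactly why every byte crosses the network twice:)

    import socket

    def relay(listen_addr, dest_addr):
        # accept the sender's stream and forward it to the receiver:
        # each byte is sent twice (sender -> proxy, proxy -> receiver)
        srv = socket.socket()
        srv.bind(listen_addr)
        srv.listen(1)
        src, _ = srv.accept()
        dst = socket.create_connection(dest_addr)
        try:
            while True:
                chunk = src.recv(65536)
                if not chunk:
                    break
                dst.sendall(chunk)
        finally:
            dst.close()
            src.close()
            srv.close()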
>
>I was planning to discuss the breakdown of at least the first phase of
>this work, in terms of blueprints, at the nova meeting in forty minutes.
>
>--Dan