[Openstack-operators] Folsom to Grizzly Upgrade Nodes

Jonathan Proulx jon at jonproulx.com
Mon Sep 23 15:04:30 UTC 2013


On Mon, Sep 23, 2013 at 10:16 AM, Joe Topjian <joe.topjian at cybera.ca> wrote:
> Hi Jon,
>
> Awesome write-up. Thanks :)
>
> Could you give some detail as to how the actual migration from nova-network
> to quantum happened? For example, once Quantum was up and running, did
> instances simply get a renewed DHCP lease from the new DHCP server and
> happily go on their way? Did you have to manually re-allocate / associate
> used floating IPs in Quantum?

I could, but I don't think what I did is useful to others. I actually
completely redid my networking setup, going from a flat layout to
vlans and different IP space.  This upgrade is meant to be my last
major service disruption on the path from "beta" to "general release",
and is the only one that disrupted running VMs; all previous updates
and reconfigs only disrupted API operations, not running systems.
Given the scope of change and the "beta" expectation of the users, I
didn't make any effort at a smooth transition.

We've actually stopped using floating IPs since the default network is
now routable IPs, but we allow users to specify their own fixed v4
addresses instead for systems that need/want consistent IP addrs.  We
provide for this by having quantum serve DHCP with a dynamic
allocation range covering the top half of the IP space, and use our
existing in-house IPAM and DNS to let users register addresses in the
lower half (this is exactly how we manage our other user subnets,
except there we use our own DHCP).  The IP space is now the same block
that was previously used for the floating IPs, so after a little
renumbering any DNS entry users had made for their floating IPs is now
available to them as a fixed IP.
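In case it helps, the split described above can be expressed with the
Grizzly-era quantum CLI roughly as follows; the network name, CIDR,
and pool boundaries here are made-up illustrations, not our actual
values:

```shell
# Sketch: give quantum's DHCP agent only the top half of the subnet,
# leaving the lower half free to be registered through an external
# IPAM/DNS system.  Name, CIDR, and addresses are illustrative.
quantum subnet-create public 192.0.2.0/24 \
    --name public-subnet \
    --allocation-pool start=192.0.2.129,end=192.0.2.254
```

Addresses outside the allocation pool are still valid on the subnet,
they just aren't handed out dynamically.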

I understand there are other use cases for floating IPs than just
getting a public mapping to the usually private fixed IP space, and
that having self-service IPAM and DNS for users is probably pretty
rare, so I'm not sure whether even that translates to other sites at all.

> Regarding your actual Quantum configuration: you're not using network
> namespaces, right?

We are using network namespaces as we are allowing projects to create
their own GRE based private networks.
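For context, that combination looks roughly like the following
Grizzly-era OVS plugin and DHCP agent settings; the file paths and
values are illustrative assumptions, not our exact config:

```shell
# Illustrative sketch (not our exact config): enable GRE tenant
# networks in the OVS plugin and namespaces in the DHCP agent.
cat > /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini <<'EOF'
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
# this node's tunnel endpoint (example address)
local_ip = 10.0.0.10
EOF

cat > /etc/quantum/dhcp_agent.ini <<'EOF'
[DEFAULT]
use_namespaces = True
EOF
```

Namespaces are what let each project network run its own DHCP even
when tenant subnets overlap.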

> Also, have you run into the need to manually cleanup
> open-vswitch? For example, with the issue of instances getting multiple
> ports, did Quantum ever clean up after itself?

For the multiple port allocation bug we just declared the systems
broken, then deleted and relaunched them; I honestly don't remember if
they were reachable on any of the assigned IPs.  In that case
(deleting the instance) the network returned to a consistent state (AFAIK).

> Or have you had to manually
> audit the open-switch config versus what Quantum thinks open-vswitch should
> have versus what should really be configured all around?

I haven't noticed any issues like that.

Caveats here being that this has only been running for a month and the
main supported use case is very simple: one provider vlan on one
bridge, so as long as the ports are created on the compute node and
stuck in the right bridge it works.
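If you did want to spot-check that, something like this would work (an
assumption about how one might audit it, not something we run
regularly): compare the ports OVS has on the integration bridge with
the ports quantum thinks exist.

```shell
# Manual cross-check sketch: what's plugged into br-int on this
# compute node vs. what quantum has recorded.
ovs-vsctl list-ports br-int
quantum port-list -c id -c mac_address -c fixed_ips
```

Any tap/qvo device on the bridge with no matching quantum port (or
vice versa) would be the kind of inconsistency Joe is asking about.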

I've added an additional vlan based provider network and that has
worked, though that was last week and there are fewer than 10 ports on
it.  Also, I and some users have played around with the GRE based
project networks and those also seem to work, though I don't have a
good sense of how much load they see.  I put them out there as "not
really supported but try it and let me know how it works".  I can see
several projects using them, some apparently dual-porting all their
instances, and no one has complained, except when I briefly broke the
metadata service on them, so they are using them and notice when they
break...
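For what it's worth, adding that extra vlan provider network was along
these lines; the network name, physical network label, vlan id, and
CIDR below are made-up examples:

```shell
# Illustrative sketch: a second vlan-backed provider network in
# Grizzly.  All names and numbers are examples, not our real values.
quantum net-create provider-vlan-200 \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 200 \
    --shared
quantum subnet-create provider-vlan-200 198.51.100.0/24
```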

-Jon
