[openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

Jesse Pretorius jesse.pretorius at gmail.com
Thu Jan 30 07:33:56 UTC 2014

On 29 January 2014 23:13, Vishvananda Ishaya <vishvananda at gmail.com> wrote:

>  I see the process as going something like this:
> * Migrate network data from nova into neutron
> * Turn off nova-network on the node
> * Run the neutron l3 agent and trigger it to create the required bridges
> etc.
> * Use ovsctl to remove the vnic from the nova bridge and add it to the
> appropriate ovs bridge
> Because the ovs bridge and the nova bridge are plugged in to the same
> physical
> device, traffic flows appropriately.
> There is some hand waving above about how to trigger the l3 agent to
> create the
> ports and security groups properly, but I think conceptually it could work.
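The vNIC move in the quoted steps can be sketched as below. This is a dry-run helper that only builds the commands (`brctl` / `ovs-vsctl`); the bridge and interface names (`br100`, `br-int`, `vnet0`) are illustrative defaults, not fixed values:

```python
def bridge_move_cmds(vif, linux_bridge="br100", ovs_bridge="br-int"):
    """Return the argv lists needed to move a VM's vNIC off the
    nova-network Linux bridge and onto the OVS integration bridge.
    Bridge/interface names are illustrative, not prescribed."""
    return [
        # detach the vNIC from the Linux bridge nova-network created
        ["brctl", "delif", linux_bridge, vif],
        # attach it to the OVS bridge the neutron agent manages
        ["ovs-vsctl", "--may-exist", "add-port", ovs_bridge, vif],
    ]

if __name__ == "__main__":
    for cmd in bridge_move_cmds("vnet0"):
        print(" ".join(cmd))
```

Because both bridges are plugged into the same physical device, the move itself is the only moment of disruption for that vNIC.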

This is a task on my list to achieve in the next few months. The
non-trivial aspect is step one as you've described it here. I would really
appreciate it if those who know the nova-network data model and those who
know the neutron data model could collaborate to write some tooling to
convert the network data.

Tasks which we've identified need to be done:

1) Convert existing networks
In our scenario we're using nova-network with VLANManager. Our target is a
Neutron setup with namespaces, GRE tunnels and Open vSwitch. In some cases
networks need to be converted to provider networks to maintain
functionality; in other cases they can be converted to completely
virtual networks. This suggests the conversion tooling needs some way to
select how a particular network, or set of networks, should be converted.
A reasonably simple start would be to convert all networks the same way.
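A rough sketch of that per-network conversion, assuming column names from nova's `networks` table (`label`, `cidr`, `gateway`, `vlan`) and emitting neutron v2 API-shaped bodies; the `provider:physical_network` value is a placeholder:

```python
def nova_net_to_neutron(nova_net, provider=False):
    """Map one nova-network row (dict of assumed column names) to a
    neutron network body and subnet body. A sketch, not real tooling."""
    network = {
        "name": nova_net["label"],
        "admin_state_up": True,
    }
    if provider:
        # keep the original VLAN tag so existing wiring keeps working
        network.update({
            "provider:network_type": "vlan",
            "provider:segmentation_id": nova_net["vlan"],
            "provider:physical_network": "default",  # placeholder
        })
    subnet = {
        "cidr": nova_net["cidr"],
        "gateway_ip": nova_net["gateway"],
        "ip_version": 4,
        "enable_dhcp": nova_net.get("enable_dhcp", True),
    }
    return network, subnet
```

The `provider` flag is the "selection" mentioned above: provider networks keep their VLAN, purely virtual networks drop it and get GRE segmentation from neutron itself.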

2) Convert existing port allocations
When switching from nova-network to neutron the instance NICs end up
losing their port allocations. These have to be recreated, connecting them
with the same IP address to the same instance on the same network (after
it's converted). A specific requirement here is that the port assigned to
the instance must have the same MAC address as it had before; otherwise
Windows will require re-activation, and Linux will see the new port as an
additional network device and add it as the next NIC instead of using the
same NIC that it already has.
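Something like the following could build the port-create body for each NIC. The field names follow the neutron v2 port API; the layout of `instance_nic` is an assumption about whatever the migration tooling extracts from nova's DB:

```python
def port_payload(instance_nic, neutron_net_id):
    """Build a neutron v2 port-create body that pins the MAC and IP,
    so the guest sees the same NIC after cutover.
    instance_nic layout is assumed: {"mac", "ip", "instance_uuid"}."""
    return {
        "port": {
            "network_id": neutron_net_id,
            # MUST match the old MAC: Windows re-activates and Linux
            # adds a new ethN if it changes
            "mac_address": instance_nic["mac"],
            "fixed_ips": [{"ip_address": instance_nic["ip"]}],
            "device_id": instance_nic["instance_uuid"],
            "device_owner": "compute:nova",
        }
    }
```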

3) Convert existing security groups
4) Convert existing security rules
5) Convert existing floating IP allocations
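For tasks 3 and 4, a per-rule mapping might look like this; it assumes nova-network rule columns (`protocol`, `from_port`, `to_port`, `cidr`) and emits a neutron v2 security-group-rule body. Nova-network rules are ingress-only, which is why `direction` is fixed:

```python
def secgroup_rule_to_neutron(nova_rule, neutron_sg_id):
    """Map one nova-network security group rule (assumed columns) to a
    neutron v2 security-group-rule body. A sketch, not real tooling."""
    return {
        "security_group_rule": {
            "security_group_id": neutron_sg_id,
            "direction": "ingress",  # nova-network has no egress rules
            "protocol": nova_rule["protocol"],
            "port_range_min": nova_rule["from_port"],
            "port_range_max": nova_rule["to_port"],
            "remote_ip_prefix": nova_rule["cidr"],
        }
    }
```

Floating IPs (task 5) would get a similar treatment against neutron's floatingip resource, associating each address with the port created in task 2.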

On 30 January 2014 08:14, Joshua Harlow <harlowja at yahoo-inc.com> wrote:

> 1. Take offline APIs & nova-compute (so new/existing VMs can't be
> scheduled/modified) -- existing running VMs will not be affected.
> 2. Save/dump nova database.
> 3. Translate nova database entries into corresponding neutron database
> entries.
> 4. Remove/adjust the *right* entries of the nova database.
> 5. Startup neutron+agents with database that it believes it was running
> with the whole time.
> 6. Restart nova-api & nova-compute (it will now never know that it was
> previously using nova-network).
> 7. Profit!

In our view, taking the APIs and nova-compute offline for the conversion
period is perfectly acceptable. This is, after all, a major plumbing change
in the architecture!
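Those seven steps can be sketched as a dry-run script. Everything here is an assumption: the service names, and especially `migrate_nova_net.py`, which is the hypothetical translation tool this thread is asking someone to write:

```python
import subprocess

# Offline cutover steps, per the quoted plan. All names are placeholders.
STEPS = [
    ["service", "nova-api", "stop"],                   # 1. take APIs offline
    ["service", "nova-compute", "stop"],
    ["sh", "-c", "mysqldump nova > nova-backup.sql"],  # 2. save/dump nova DB
    ["python", "migrate_nova_net.py"],                 # 3+4. translate entries (hypothetical tool)
    ["service", "neutron-server", "start"],            # 5. start neutron+agents
    ["service", "nova-api", "start"],                  # 6. restart nova
    ["service", "nova-compute", "start"],              #    ...and nova-compute
]

def cutover(dry_run=True):
    """Print (or run) the cutover steps; returns the planned commands."""
    planned = []
    for cmd in STEPS:
        planned.append(cmd)
        if dry_run:
            print("would run:", " ".join(cmd))
        else:
            subprocess.check_call(cmd)
    return planned
```

Step 7 (profit) is left as an exercise for the operator.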

If we can't do this with all instances remaining online for most of the
time (there will have to be a slight disruption as the traffic flows change
to go through the L3 agent), then ideally we should be able to convert a
single node at a time so that we can manage the disruption.

My challenge is that I'm more of an operator than a developer. My Python
skills would rate as 'noob' or perhaps at most 'bugfixer'. Ideally I need
to work with a skilled group that has the same itch to scratch to make
this happen. If such a Holy Grail is not found, I shall find a way, but
it won't be pretty. ;)