[Openstack-operators] Issues with hybrid neutron ml2/ovs-agent ports after Icehouse upgrade
Robert van Leeuwen
Robert.vanLeeuwen at spilgames.com
Mon Oct 6 13:43:43 UTC 2014
> We run ml2 on the API nodes, but the openvswitch plugin/ovs-agent on the compute/network nodes.
> We ran this split setup because under Havana it was the only way we could get ml2 working correctly, and this setup was recommended by an ml2 dev.
> We kept this design because it continued to work under Icehouse, seemingly without issue. We upgraded from havana to icehouse without too much trouble a couple months ago.
I'm wondering whether there should be any ML2 config/daemons on the compute nodes at all.
As far as I know you still start the regular openvswitch daemons on the compute nodes.
You do not even need the ml2 package (we use RDO) for it.
It *looks* like it is working, but I'm not sure there is any way to verify it is actually doing any ML2 stuff...
> However, we had not rebooted any compute nodes since then until this week.
> When the compute nodes came back up, instances that had been created before moving to icehouse did not start up because the vif for them was not being created.
> Turns out this is because ports created under havana were missing the ‘hybrid’ property.
> And this was preventing the vif from being recreated on the compute host.
> The ports for instances created after the icehouse upgrade
> did have this property, and those instances started back up without a problem.
> Hope this may be useful info for somebody.
Thx for the heads up!
We are facing the exact same issues in our dev environment when testing the migration to ML2.
We are running Icehouse but have not migrated to ML2 yet.
In our setup, instances spawned on Icehouse without ML2 configured also hit the same issue,
so it does not seem to be Havana-specific.
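For anyone wanting to spot affected ports before the next compute node reboot, here is a minimal sketch. It assumes the 'hybrid' property mentioned above is the `ovs_hybrid_plug` flag in the port's `binding:vif_details` (the Icehouse-era field name); the helper name and the sample port dicts are made up for illustration, shaped like what the Neutron API returns.

```python
# Hypothetical check: flag ports whose binding:vif_details lacks the
# ovs_hybrid_plug flag (assumed to be the 'hybrid' property discussed
# above). Port dicts here mimic the shape of Neutron API responses.
def ports_missing_hybrid_flag(ports):
    """Return IDs of ports without ovs_hybrid_plug in binding:vif_details."""
    affected = []
    for port in ports:
        vif_details = port.get("binding:vif_details") or {}
        if not vif_details.get("ovs_hybrid_plug"):
            affected.append(port["id"])
    return affected

# Example: a pre-upgrade port (empty vif_details) next to a post-upgrade one.
ports = [
    {"id": "old-port", "binding:vif_details": {}},
    {"id": "new-port", "binding:vif_details": {"ovs_hybrid_plug": True}},
]
print(ports_missing_hybrid_flag(ports))  # ['old-port']
```

Feeding it the real port list (e.g. what `neutron port-show` reports per port) would give a quick inventory of instances that will fail to get their vif recreated on reboot.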
We are still running the "old-style" setup for the moment but plan to move to ML2 next month.
Not exactly looking forward to that process :-(
Robert van Leeuwen