[openstack-dev] [Fuel][Infra] HA deployment tests on nodepool
Artur Kaszuba
akaszuba at mirantis.com
Wed Nov 25 13:55:29 UTC 2015
Hi All
On 25.11.2015 at 14:00, Aleksandra Fedorova wrote:
> ...
> Thus the test layout as I see it should be the following:
>
> - one "manager vm" which will be responsible for the main test flow.
> On this vm we will:
> - build fuel-library package,
> - generate a repository,
> - get fuel-qa code,
> - run the actual test based on job parameters,
> - collect test results and artifacts.
> - one vm with Fuel installed,
> - several bootstrapped nodes according to the test configuration -
> here we can probably switch from using nodes which are bootstrapped
> with Fuel to nodes which are just generic ones.
>
> As I understand it from reading the grenade job log [1], having a
> manager vm is actually expected: there is a "main" node and a list
> of subnodes which you can talk to from there. So this setup is not
> impossible, but
> requires support for requesting different types of nodes from
> nodepool. And we probably need to patch the Zuul_v3 spec in [2] to add
> this case to the format of request-accept calls?
>
> The other question here would be the network configuration: to
> function properly, Fuel needs 5 (or 6?) different networks, and we
> will need a way to configure them before we run the actual
> deployment. What are the options for configuring networks on top of
> nodes provided by nodepool?
>
>
Some time ago I worked on a similar solution for running Fuel inside
instances of another OpenStack cloud. During my tests I found a problem
with networking, specifically with passing VLANs between OpenStack
instances. The same problem could appear when we use nodepool as a
source of virtual machines for deployment tests. I will try to show
where the problem exists.
The underlying OpenStack infrastructure was a Fuel 6.0 deployment
running on hardware nodes:
- one controller node
- 5 compute nodes
- Neutron with GRE segmentation
To reduce network issues, all instances were run on a single compute node.
The test assumes that we run the Fuel master and slaves as OpenStack
instances inside one dedicated tenant. Every test requires creating a
few networks and instances, and it was easier to manage them as a group
owned by that tenant: at the end of the test we can simply delete all
tenant objects, as in the sketch below.
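For illustration, the cleanup could look roughly like the sketch below.
It uses the modern openstacksdk (not the tooling available at the time)
and assumes the credentials in clouds.yaml are scoped to the dedicated
test tenant:

    import openstack

    conn = openstack.connect(cloud='fuel-test')  # assumed cloud entry

    # Deletion is asynchronous; a real script would wait for it.
    for server in conn.compute.servers():
        conn.compute.delete_server(server, ignore_missing=True)
    for router in conn.network.routers():
        # Router interfaces must be detached before deletion.
        for port in conn.network.ports(
                device_id=router.id,
                device_owner='network:router_interface'):
            conn.network.remove_interface_from_router(
                router, port_id=port.id)
        conn.network.delete_router(router, ignore_missing=True)
    for network in conn.network.networks():
        conn.network.delete_network(network, ignore_missing=True)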
A standard Fuel installation requires a few networks, so I created:
- admin
- public
- mgmt
- storage
In standard Fuel installations these networks can either be attached
directly to an interface or created as VLANs on top of a network
interface. Here I wanted to avoid problems with additional
tagging/fragmentation, so each network was created as a separate Neutron
network with a dedicated interface inside each instance.
This requires creating Neutron infrastructure with:
- a router connected to the external network
- 4 Neutron networks
- the public and admin networks connected to the router
Each instance is connected to its networks by dedicated interfaces; it
looks like this:
                  +---------+
                  | Neutron |   +~~~~~~~~+
           +------|  router |---|Internet|
           |      +---------+   +~~~~~~~~+
           |           |
      +---------+ +---------+ +---------+ +---------+
      | Neutron | | Neutron | | Neutron | | Neutron |
      | network | | network | | network | | network |
      |  admin  | | public  | |  mgmt   | | storage |
      +---------+ +---------+ +---------+ +---------+
        |     |        |           |       |
        |     +------+ |           |       |
        |            | |           |       |
  +-----------+    +------------------------+
  |Fuel Master|    |      Fuel slave X      |
  +-----------+    +------------------------+
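For illustration, the same topology could be created with a few
openstacksdk calls. This is only a sketch using the modern SDK (not the
tooling available at the time); the cloud name, external network name
and CIDRs are assumptions:

    import openstack

    conn = openstack.connect(cloud='fuel-test')      # assumed cloud entry
    external = conn.network.find_network('ext-net')  # assumed name

    router = conn.network.create_router(
        name='fuel-router',
        external_gateway_info={'network_id': external.id})

    subnets = {}
    for name, cidr in [('admin', '10.20.0.0/24'),
                       ('public', '172.16.0.0/24'),
                       ('mgmt', '192.168.0.0/24'),
                       ('storage', '192.168.1.0/24')]:
        net = conn.network.create_network(name=name)
        # Neutron DHCP stays off on the admin network because the Fuel
        # master runs its own DHCP/PXE server there.
        subnets[name] = conn.network.create_subnet(
            network_id=net.id, name=name + '-subnet',
            ip_version=4, cidr=cidr, enable_dhcp=(name != 'admin'))

    # Only the admin and public networks are attached to the router,
    # as in the diagram above.
    for name in ('admin', 'public'):
        conn.network.add_interface_to_router(router,
                                             subnet_id=subnets[name].id)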
In a standard OpenStack installation it is not possible to run a DHCP
server on an instance: the traffic is blocked by the firewall. To solve
this problem I changed the Neutron agent code to allow DHCP traffic.
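On clouds which support the Neutron port-security extension, a gentler
alternative to patching the agent might be to disable filtering on the
Fuel master's admin port. A hypothetical sketch (the port name is made
up):

    import openstack

    conn = openstack.connect(cloud='fuel-test')         # assumed cloud entry
    port = conn.network.find_port('fuel-master-admin')  # assumed port name

    # Security groups must be cleared before port security can be
    # disabled; afterwards DHCP replies from the instance are no
    # longer filtered on this port.
    conn.network.update_port(port, security_groups=[],
                             port_security_enabled=False)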
A short version of the procedure used to install Fuel inside OpenStack
(sketched in code below the list):
- upload the Fuel 6.1 ISO to Glance
- upload an iPXE ISO to Glance
- create an instance for the Fuel master node and install it
- create 2 instances for the Fuel slave nodes and boot them from iPXE
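A rough openstacksdk sketch of those steps; the image names, flavor and
file paths are made up for illustration:

    import openstack

    conn = openstack.connect(cloud='fuel-test')  # assumed cloud entry
    flavor = conn.get_flavor('m1.large')         # assumed flavor

    # Upload both ISO images to Glance.
    fuel_iso = conn.create_image('fuel-6.1', filename='fuel-6.1.iso',
                                 disk_format='iso',
                                 container_format='bare')
    ipxe_iso = conn.create_image('ipxe', filename='ipxe.iso',
                                 disk_format='iso',
                                 container_format='bare')

    # Boot the Fuel master from the Fuel ISO on the admin network
    # (the public network is omitted here for brevity).
    conn.create_server('fuel-master', image=fuel_iso.id, flavor=flavor,
                       network='admin', wait=True)

    # Boot two slaves from the iPXE image; they chain-load from the
    # master's PXE server over the admin network.
    for i in (1, 2):
        conn.create_server('fuel-slave-%d' % i, image=ipxe_iso.id,
                           flavor=flavor, network='admin', wait=True)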
After those steps I had a Fuel installation with 2 nodes ready for
deployment, like this:
+----------------------------------------------------+
|                   Hardware node                    |
|                 +------------------------+         |
|                 |      Fuel slave 1      |         |
|                 |     (KVM Instance)     |         |
|                 +------------------------+         |
|                  |  |        |        |            |
|       +----------+  |        |        |            |
|       |             |        |        |            |
|  +---------+ +---------+ +---------+ +---------+   |
|  | Neutron | | Neutron | | Neutron | | Neutron |   |
|  | network | | network | | network | | network |   |
|  |  admin  | | public  | |  mgmt   | | storage |   |
|  |  (OVS)  | |  (OVS)  | |  (OVS)  | |  (OVS)  |   |
|  +---------+ +---------+ +---------+ +---------+   |
|    |     |          |        |        |            |
|    |     +------+   |        |        |            |
|    |            |   |        |        |            |
| +-----------+ +------------------------+           |
| |Fuel Master| |      Fuel slave 2      |           |
| |   (KVM)   | |     (KVM Instance)     |           |
| +-----------+ +------------------------+           |
|                                                    |
+----------------------------------------------------+
With this configuration it was possible to deploy a new OpenStack
environment, but some types of configuration did not work correctly. The
problem appeared when I deployed OpenStack with VLAN segmentation: the
OSTF tests did not pass.
+----------------------------------------------+
|                Hardware node                 |
|  +---------------------------------------+   |
|  | Fuel slave 1    +------------------+  |   |
|  | (KVM Instance)  |Openstack Instance|  |   |
|  |                 |      TEST1       |  |   |
|  |                 |      (KVM)       |  |   |
|  |                 +------------------+  |   |
|  |                          |           |   |
|  |                     +----------+     |   |
|  |                     |   OVS    |     |   |
|  |                     |  (VLAN)  |     |   |
|  |                     +----------+     |   |
|  |                          |tag:1000   |   |
|  +---------------------------ETH---------+   |
|                              |               |
|                          +----------+        |
|                          |   OVS    |        |
|                          |  (GRE)   |        |
|                          +----------+        |
|                              |               |
|  +---------------------------ETH---------+   |
|  | Fuel slave 2             |tag:1000   |   |
|  | (KVM Instance)      +----------+     |   |
|  |                     |   OVS    |     |   |
|  |                     |  (VLAN)  |     |   |
|  |                     +----------+     |   |
|  |                          |           |   |
|  |                 +------------------+  |   |
|  |                 |Openstack Instance|  |   |
|  |                 |      TEST2       |  |   |
|  |                 |      (KVM)       |  |   |
|  |                 +------------------+  |   |
|  +---------------------------------------+   |
|                                              |
+----------------------------------------------+
The problem appears when we want to send packets between instances TEST1
and TEST2:
- a packet is sent from TEST1,
- the OVS inside Fuel slave 1 needs to deliver the packet to the other
compute node within the correct tenant network, so it adds a VLAN tag
and sends the frame out through its network interface,
- the OVS on the hardware node receives the tagged frame on a port that
is not configured for tagging, and drops it.
To start using nodepool instances in deployment tests we need to be sure
that the OpenStack cloud used by nodepool allows us to:
- deliver responses from the DHCP server installed on the Fuel master to
the slave instances; these packets must pass the compute node firewall,
and in a standard configuration DHCP responses are accepted only from
the network node,
- pass tagged VLAN traffic between OpenStack instances; without this we
cannot execute all kinds of tests and scenarios.
Is this possible in the OpenStack infrastructure currently used by
nodepool? Or maybe we could solve it in another way?
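For the VLAN problem, one possible direction (if the cloud under
nodepool is new enough to expose Neutron's vlan-aware-VMs "trunk"
extension) would be to model the tagged traffic as trunk subports. A
hypothetical sketch, with the port names made up:

    import openstack

    conn = openstack.connect(cloud='fuel-test')  # assumed cloud entry

    # The port already attached to the Fuel slave becomes the trunk
    # parent; a second port carries the tagged VLAN 1000 traffic.
    parent = conn.network.find_port('fuel-slave-1-mgmt')  # assumed name
    tagged = conn.network.create_port(network_id=parent.network_id,
                                      name='fuel-slave-1-vlan1000')

    trunk = conn.network.create_trunk(port_id=parent.id,
                                      name='fuel-slave-1-trunk')
    conn.network.add_trunk_subports(trunk, [{
        'port_id': tagged.id,
        'segmentation_type': 'vlan',
        'segmentation_id': 1000,
    }])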
--
Artur Kaszuba